In their efforts to modernize their health information systems and share medical information, VA and DOD begin from different positions. As shown in table 1, VA has one integrated medical information system, VistA (Veterans Health Information Systems and Technology Architecture), which uses all electronic records. All 128 VA medical sites thus have access to all VistA information. (Table 1 also shows, for completeness, VA’s planned modernized system and its associated data repository.) In contrast, DOD has multiple medical information systems (see table 2). DOD’s various systems are not integrated, and its 138 sites do not necessarily communicate with each other. In addition, not all of DOD’s medical information is electronic: some records are paper-based. For almost a decade, VA and DOD have been pursuing ways to share data in their health information systems and create comprehensive electronic records. However, the departments have faced considerable challenges, leading to repeated changes in the focus of their initiatives and target dates for accomplishment. As shown in figure 1, the departments’ efforts have involved a number of distinct initiatives, both long-term initiatives to develop future modernized solutions, and short-term initiatives to respond to more immediate needs to share information in existing systems. As the figure shows, these initiatives often proceeded in parallel. The departments’ first initiative, known as the Government Computer-Based Patient Record (GCPR) project, aimed to develop an electronic interface that would let physicians and other authorized users at VA and DOD health facilities access data from each other’s health information systems. The interface was expected to compile requested patient information in a virtual record (that is, electronic as opposed to paper) that could be displayed on a user’s computer screen. 
In 2001 and 2002, we reviewed the GCPR project and noted disappointing progress, exacerbated in large part by inadequate accountability and poor planning and oversight, which raised doubts about the departments’ ability to achieve a virtual medical record. We determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. We made recommendations in both years that the departments enhance the project’s overall management and accountability. In particular, we recommended that the departments designate a lead entity and a clear line of authority for the project; create comprehensive and coordinated plans that include an agreed-upon mission and clear goals, objectives, and performance measures; revise the project’s original goals and objectives to align with the current strategy; commit the executive support necessary to adequately manage the project; and ensure that it followed sound project management principles. In response, the two departments revised their strategy in July 2002, refocusing the project and dividing it into two initiatives. A short-term initiative (the Federal Health Information Exchange or FHIE) was to enable DOD, when service members left the military, to electronically transfer their health information to VA. VA was designated as the lead entity for implementing FHIE, which was successfully completed in 2004. A longer term initiative was to develop a common health information architecture that would allow the two-way exchange of health information. The common architecture is to include standardized, computable data, communications, security, and high-performance health information systems (these systems, DOD’s CHCS II and VA’s HealtheVet VistA, were already in development, as shown in the figure). 
The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). The two repositories are to exchange information through an interface named CHDR. In March 2004, the departments began to develop the CHDR interface, and they planned to begin implementation by October 2005. However, implementation of the first release of the interface (at one site) occurred in September 2006, almost a year later. In a review in June 2004, we identified a number of management weaknesses that could have contributed to this delay and made a number of recommendations, including creation of a comprehensive and coordinated project management plan. In response, the departments agreed to our recommendations and improved the management of the CHDR program by designating a lead entity with final decision-making authority and establishing a project management structure. As we noted in later testimony, however, the program did not develop a project management plan that would give a detailed description of the technical and managerial processes necessary to satisfy project requirements (including a work breakdown structure and schedule for all development, testing, and implementation tasks), as we had recommended. In October 2004, the two departments established two more short-term initiatives in response to a congressional mandate. These were two demonstration projects: the Laboratory Data Sharing Interface, aimed at allowing VA and DOD facilities to share laboratory resources, and the Bidirectional Health Information Exchange (BHIE), aimed at allowing both departments’ clinicians access to records on shared patients (that is, those who receive care from both departments). As demonstration projects, both initiatives were limited in scope, with the intention of providing interim solutions to the departments’ need for more immediate health information sharing. 
However, because BHIE provided access to up-to-date information, the departments’ clinicians expressed strong interest in increasing its use. As a result, the departments began planning to broaden BHIE’s capabilities and expand its implementation considerably. Until the departments’ modernized systems are fully developed and implemented, extending BHIE connectivity could provide each department with access to most data in the other’s legacy systems. According to a VA/DOD annual report and program officials, the departments now consider BHIE an interim step in their overall strategy to create a two-way exchange of electronic medical records. Most recently, the departments have announced a further change to their information-sharing strategy. In January 2007, they announced their intention to jointly develop a new inpatient medical record system. According to the departments, adopting this joint solution will facilitate the seamless transition of active-duty service members to veteran status, as well as making inpatient healthcare data on shared patients immediately accessible to both DOD and VA. In addition, the departments consider that a joint development effort could allow them to realize significant cost savings. We have not evaluated the departments’ plans or strategy in this area. Throughout the history of these initiatives, evaluations beyond ours have also found deficiencies in the departments’ efforts, especially with regard to the need for comprehensive planning. For example, in fiscal year 2006, the Congress did not provide all the funding requested for HealtheVet VistA because it did not consider that the funding had been adequately justified. In addition, a recent presidential task force identified the need for VA and DOD to improve their long-term planning. 
This task force, reporting on gaps in services provided to returning veterans, noted problems with regard to sharing information on wounded service members, including the inability of VA providers to access paper DOD inpatient health records. According to the report, although significant progress has been made on sharing electronic information, more needs to be done. The task force recommended that VA and DOD continue to identify long-term initiatives and define scope and elements of a joint inpatient electronic health record. VA and DOD have made progress in both their long-term and short-term initiatives to share health information. In the long-term project to develop modernized health information systems, the departments have begun to implement the first release of the interface between their modernized data repositories, among other things. The two departments have also made progress in their short-term projects to share information in existing systems, having completed two initiatives and making important progress on another. In addition, the two departments have undertaken ad hoc activities to accelerate the transmission of health information on severely wounded patients from DOD to VA’s four polytrauma centers. However, despite the progress made and the sharing achieved, the tasks remaining to achieve the goal of a shared electronic medical record remain substantial. In their long-term effort to share health information, VA and DOD have completed the development of their modernized data repositories, agreed on standards for various types of data, and begun to populate the repositories with these data. In addition, they have now implemented the first release of the CHDR interface, which links the two departments’ repositories, at seven sites. The first release has enabled the seven sites to share limited medical information: specifically, computable outpatient pharmacy and drug allergy information for shared patients. 
According to DOD officials, in the third quarter of 2007 the department will send out instructions to its remaining sites so that they can all begin using CHDR. According to VA officials, the interface will be available across the department when necessary software updates are released, which is expected this July. Besides being a milestone in the development of the departments’ modernized systems, the interface implementation provides benefits to the departments’ current systems. Data transmitted by CHDR are permanently stored in the modernized data repositories, CDR and HDR. Once in the repositories, these computable data can be used by DOD and VA at all sites through their existing systems. CHDR also provides terminology mediation (translation of one agency’s terminology into the other’s). VA and DOD plans call for developing the capability to exchange computable laboratory results data through CHDR during fiscal year 2008. Although implementing this interface is an important accomplishment, the departments are still a long way from completion of the modernized health information systems and comprehensive longitudinal health records. While DOD and VA had originally projected completion dates for their modernized systems of 2011 and 2012, respectively, department officials told us that there is currently no scheduled completion date for either system. Further, both departments have still to identify the next types of data to be stored in the repositories. The two departments will then have to populate the repositories with the standardized data, which involves different tasks for each department. Specifically, although VA’s medical records are already electronic, it still has to convert these into the interoperable format appropriate for its repository. DOD, in addition to converting current records from its multiple systems, must also address medical records that are not automated. 
As pointed out by a recent Army Inspector General’s report, some DOD facilities are having problems with hard-copy records. In the same report, inaccurate and incomplete health data were identified as a problem to be addressed. Before the departments can achieve the long-term goal of seamless sharing of medical information, all these tasks and challenges will have to be addressed. Consequently, it is essential for the departments to develop a comprehensive project plan to guide these efforts to completion, as we have previously recommended. In addition to the long-term effort described above, the two departments have made some progress in meeting immediate needs to share information in their respective legacy systems by setting up short-term projects, as mentioned earlier, which are in various stages of completion. In addition, the departments have set up special processes to transfer data from DOD facilities to VA’s polytrauma centers, which treat traumatic brain injuries and other especially severe injuries. DOD has been using FHIE to transfer information to VA since 2002. According to department officials, over 184 million clinical messages on more than 3.8 million veterans have been transferred to the FHIE data repository as of March 2007. Data elements transferred are laboratory results, radiology results, outpatient pharmacy data, allergy information, consultation reports, elements of the standard ambulatory data record, and demographic data. Further, since July 2005, FHIE has been used to transfer pre- and post-deployment health assessment and reassessment data; as of March 2007, VA has access to data for more than 681,000 separated service members and demobilized Reserve and National Guard members who had been deployed. Transfers are done in batches once a month, or weekly for veterans who have been referred to VA treatment facilities. 
According to a joint DOD/VA report, FHIE has made a significant contribution to the delivery and continuity of care of separated service members as they transition to veteran status, as well as to the adjudication of disability claims. One of the departments’ demonstration projects, the Laboratory Data Sharing Interface (LDSI), is now fully operational and is deployed when local agencies have a business case for its use and sign an agreement. It requires customization for each locality and is currently deployed at nine locations. LDSI currently supports a variety of chemistry and hematology tests, and work is under way to include microbiology and anatomic pathology. Once LDSI is implemented at a facility, the only nonautomated action needed for a laboratory test is transporting the specimens. If a test is not performed at a VA or DOD doctor’s home facility, the doctor can order the test, the order is transmitted electronically to the appropriate lab (the other department’s facility or in some cases a local commercial lab), and the results are returned electronically. Among the benefits of LDSI, according to VA and DOD, are increased speed in receiving laboratory results and decreased errors from manual entry of orders. The LDSI project manager in San Antonio stated that another benefit of the project is the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories’ systems. Additionally, the San Antonio VA facility no longer has to contract out some of its laboratory work to private companies, but instead uses the DOD laboratory. Developed under a second demonstration project, the BHIE interface is now available throughout VA and partially deployed at DOD. It is currently deployed at 25 DOD sites, providing access to 15 medical centers, 18 hospitals, and over 190 outpatient clinics associated with these sites. DOD plans to make current BHIE capabilities available departmentwide by June 2007. 
The interface permits a medical care provider to query patient data from all VA sites and any DOD site where it is installed and to view that data onscreen almost immediately. It not only allows DOD and VA to view each other’s information, it also allows DOD sites to see previously inaccessible data at other DOD sites. As initially developed, the BHIE interface provides access to information in VA’s VistA and DOD’s CHCS, but it is currently being expanded to query data in other DOD databases (in addition to CHCS). In particular, DOD has developed an interface to the Clinical Information System (CIS), an inpatient system used by many DOD facilities, which will provide bidirectional views of discharge summaries. The BHIE-CIS interface is currently deployed at five DOD sites and planned for eight others. Further, interfaces to two additional systems are planned for June and July 2007:
● An interface to DOD’s modernized data repository, CDR, will give access to outpatient data from combat theaters.
● An interface to another DOD database, the Theater Medical Data Store, will give access to inpatient information from combat theaters.
The departments also plan to make more data elements available. Currently, BHIE enables text-only viewing of patient identification, outpatient pharmacy, microbiology, cytology, radiology, laboratory orders, and allergy data from its interface with DOD’s CHCS. Where it interfaces with CIS, it also allows viewing of discharge summaries from VA and the five DOD sites. DOD staff told us that in early fiscal year 2008, they plan to add provider notes, procedures, and problem lists. Later in fiscal year 2008, they plan to add vital signs, scanned images and documents, family history, social history, and other history questionnaires. In addition, at the VA/DOD site in El Paso, a trial is under way of a process for exchanging radiological images using the BHIE/FHIE infrastructure. Some images have successfully been exchanged. 
Through their efforts on these long- and near-term initiatives, VA and DOD are achieving exchanges of various types of health information (see attachment 1 for a summary of all the types of data currently being shared and those planned for the future, as well as cost data on the initiatives). However, these exchanges are as yet limited, and significant work remains to be done to expand the data shared and integrate the various initiatives. In addition to the information technology initiatives described, DOD and VA have set up special activities to transfer medical information to VA’s four polytrauma centers, which are treating active-duty service members severely wounded in combat. Polytrauma centers care for veterans and returning service members with injuries to more than one physical region or organ system, one of which may be life threatening, and which result in physical, cognitive, psychological, or psychosocial impairments and functional disability. Some examples of polytrauma include traumatic brain injury (TBI), amputations, and loss of hearing or vision. When service members are seriously injured in a combat theater overseas, they are first treated locally. They are then generally evacuated to Landstuhl Medical Center in Germany, after which they are transferred to a military treatment facility in the United States, usually Walter Reed Army Medical Center in Washington, D.C.; the National Naval Medical Center in Bethesda, Maryland; or Brooke Army Medical Center, at Fort Sam Houston, Texas. From these facilities, service members suffering from polytrauma may be transferred to one of VA’s four polytrauma centers for treatment. At each of these locations, the injured service members will accumulate medical records, in addition to medical records already in existence before they were injured. However, the DOD medical information is currently collected in many different systems and is not easily accessible to VA polytrauma centers. Specifically:
1. In the combat theater, electronic medical information may be collected for a variety of reasons, including routine outpatient care, as well as serious injuries. These data are stored in the Theater Medical Data Store, which can be accessed by unit commanders and others. (As mentioned earlier, the departments have plans to develop a BHIE interface to this system by July 2007. Until then, VA cannot access these data.) In addition, both inpatient and outpatient medical data for patients who are evacuated are entered into the Joint Patient Tracking Application. (A few VA polytrauma center staff have been given access to this application.)
2. At Landstuhl, inpatient medical records are paper-based (except for discharge summaries). The paper records are sent with a patient as the individual is transferred for treatment in the United States.
3. At the DOD treatment facility (Walter Reed, Bethesda, or Brooke), additional information will be recorded in CIS and CHCS/CDR.
When service members are transferred to a VA polytrauma center, VA and DOD have several ad hoc processes in place to electronically transfer the patients’ medical information:
● DOD has set up secure links to enable a limited number of clinicians at the polytrauma centers to log directly into CIS at Walter Reed and Bethesda Naval Hospital to access patient data.
● Staff at Walter Reed collect paper records, print records from CIS, scan all these, and transmit the scanned data to three of the four polytrauma centers. DOD staff said that they are working on establishing this capability at the Brooke and Bethesda medical centers, as well as the fourth VA polytrauma center. According to VA staff, although the initiative began several months ago, it has only recently begun running smoothly as the contractor became more skilled at assembling the records. DOD staff also pointed out that this laborious process is feasible only because the number of polytrauma patients is small (about 350 in all to date); it would not be practical on a large scale.
● Staff at Walter Reed and Bethesda are transmitting radiology images electronically to three polytrauma centers. (A fourth has this capability, but at this time no radiology images have been transferred there.) Access to radiology images is a high priority for polytrauma center doctors, but like scanning paper records, transmitting these images requires manual intervention: when each image is received at VA, it must be individually uploaded to VistA’s imagery viewing capability. This process would not be practical for large volumes of images.
● VA has access to outpatient data (via BHIE) from 25 DOD sites, including Landstuhl.
Although these various efforts to transfer medical information on seriously wounded patients are working, and the departments are to be commended on their efforts, the multiple processes and laborious manual tasks illustrate the effects of the lack of integration in health information systems and the difficulties of exchanging information in its absence. In conclusion, through the long- and short-term initiatives described, as well as efforts such as those at the polytrauma centers, VA and DOD are achieving exchanges of health information. However, the exchanges are as yet limited, and significant work remains to be done to fully achieve the goal of exchanging interoperable, computable data, including agreeing to standards for the remaining categories of medical information, populating the data repositories with all this information, completing the development of HealtheVet VistA and AHLTA, and transitioning from the legacy systems. To complete these tasks, a detailed project management plan continues to be of vital importance to the ultimate success of the effort to develop a lifelong virtual medical record. 
We have previously recommended that the departments develop a clearly defined project management plan that describes the technical and managerial processes necessary to satisfy project requirements, including a work breakdown structure and schedule for all development, testing, and implementation tasks. Without a plan of sufficient detail, VA and DOD increase the risk that the long-term project will not deliver the planned capabilities in the time and at the cost expected. Further, it is not clear how all the initiatives we have described today are to be incorporated into an overall strategy toward achieving the departments’ goal of comprehensive, seamless exchange of health information. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact Valerie C. Melvin, Director, Human Capital and Management Information Systems Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions to this testimony include Barbara Oliver, Assistant Director; Barbara Collier; and Glenn Spiegel. Table 3 summarizes the types of health data currently shared through the long- and near-term initiatives we have described, as well as types of data that are currently planned for addition. While this gives some indication of the scale of the tasks involved in sharing medical information, it does not depict the full extent of information that is currently being captured in health information systems and that remains to be addressed. Table 4 shows costs expended on these information sharing initiatives since their inception. Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001. Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. 
Washington, D.C.: June 12, 2002. Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003. Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004. Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004. Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005. Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006. DOD and VA Exchange of Computable Pharmacy Data. GAO-07-554R. Washington, D.C.: April 30, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) and the Department of Defense (DOD) are engaged in ongoing efforts to share medical information, which is important in helping to ensure high-quality health care for active-duty military personnel and veterans. These efforts include a long-term program to develop modernized health information systems based on computable data: that is, data in a format that a computer application can act on--for example, to provide alerts to clinicians of drug allergies. In addition, the departments are engaged in near-term initiatives involving existing systems. GAO was asked to testify on the history and current status of these long- and near-term efforts to share health information. To develop this testimony, GAO reviewed its previous work, analyzed documents, and interviewed VA and DOD officials about current status and future plans. For almost a decade, VA and DOD have been pursuing ways to share health information and create comprehensive electronic medical records. However, they have faced considerable challenges in these efforts, leading to repeated changes in the focus of their initiatives and target dates. Currently, the two departments are pursuing both long- and short-term initiatives to share health information. Under their long-term initiative, the modern health information systems being developed by each department are to share standardized computable data through an interface between data repositories associated with each system. The repositories have now been developed, and the departments have begun to populate them with limited types of health information. In addition, the interface between the repositories has been implemented at seven VA and DOD sites, allowing computable outpatient pharmacy and drug allergy data to be exchanged. Implementing this interface is a milestone toward the departments' long-term goal, but more remains to be done. 
Besides extending the current capability throughout VA and DOD, the departments must still agree to standards for the remaining categories of medical information, populate the data repositories with this information, complete the development of the two modernized health information systems, and transition from their existing systems. While pursuing their long-term effort to develop modernized systems, the two departments have also been working to share information in their existing systems. Among various near-term initiatives are a completed effort to allow the one-way transfer of health information from DOD to VA when service members leave the military, as well as ongoing demonstration projects to exchange limited data at selected sites. One of these projects, building on the one-way transfer capability, developed an interface between certain existing systems that allows a two-way view of current data on patients receiving care from both departments. VA and DOD are now working to link other systems via this interface and extend its capabilities. The departments have also established ad hoc processes to meet the immediate need to provide data on severely wounded service members to VA's polytrauma centers, which specialize in treating such patients. These processes include manual workarounds (such as scanning paper records) that are generally feasible only because the number of polytrauma patients is small. These multiple initiatives and ad hoc processes highlight the need for continued efforts to integrate information systems and automate information exchange. In addition, it is not clear how all the initiatives are to be incorporated into an overall strategy focused on achieving the departments' goal of comprehensive, seamless exchange of health information.
In response to concerns about the lack of a coordinated federal approach to disaster relief, President Carter established FEMA by Executive Order in 1979 to consolidate and coordinate emergency management functions in one location. In 2003, FEMA became a component of the Emergency Preparedness and Response (EP&R) Directorate in the newly created DHS. Much like its FEMA predecessor, EP&R’s mission was to help the nation to prepare for, mitigate the effects of, respond to, and recover from disasters. While FEMA moved intact to DHS and most of its operations became part of the EP&R Directorate, some of its functions were moved to other organizations within DHS. In addition, functions that were formerly part of other agencies were incorporated into the new EP&R organization. After FEMA moved into DHS, it was reorganized numerous times. FEMA’s preparedness functions were transferred over 2 years to other entities in DHS, reducing its mission responsibilities. However, recent legislation transferred many preparedness functions back to FEMA. Today, once again, FEMA’s charge is to lead the nation’s efforts to prepare for, protect against, respond to, recover from, and mitigate the risk of natural disasters, acts of terrorism, and other man-made disasters, including catastrophic incidents. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) establishes the process for states to request a presidential disaster declaration. The Stafford Act requires the governor of the affected state to request a declaration by the President. In this request the governor must affirm that the situation is of such severity and magnitude that effective response is beyond the capabilities of the state and the affected local governments and that federal assistance is necessary. Before a governor asks for disaster assistance, federal, state, and local officials normally conduct a joint preliminary damage assessment. 
FEMA is responsible for recommending to the President whether to declare a disaster and trigger the availability of funds as provided for in the Stafford Act. When an obviously severe or catastrophic event occurs, a disaster may be declared before the preliminary damage assessment is completed. In response to a governor’s request, the President may declare that a major disaster or emergency exists. This declaration activates numerous assistance programs from FEMA and may also trigger programs operated by other federal agencies, such as the Departments of Agriculture, Labor, Health and Human Services, and Housing and Urban Development, as well as the Small Business Administration to assist a state in its response and recovery efforts. FEMA can also issue task orders—called mission assignments—directing other federal agencies and DHS components, or “performing agencies,” to perform work on its behalf to respond to a major disaster. The federal disaster assistance provided under a major disaster declaration has no overall dollar limit. However, each of FEMA’s assistance programs has limits either in the form of federal-state cost share provisions or funding caps. FEMA provides assistance primarily through one or more of the following three grant programs: Public Assistance provides aid to state government agencies; local governments; Indian tribes, authorized tribal organizations, and Alaskan Native villages; and private nonprofit organizations or institutions that provide certain services otherwise performed by a government agency. Assistance is provided for projects such as debris removal, emergency protective measures to preserve life and property, and repair and replacement of damaged structures, such as buildings, utilities, roads and bridges, recreational facilities, and water-control facilities (e.g., dikes and levees). 
Individual Assistance provides for the necessary expenses and serious needs of disaster victims that cannot be met through insurance or low- interest Small Business Administration loans. FEMA provides temporary housing assistance to individuals whose homes are unlivable because of a disaster. Other available services include unemployment compensation and crisis counseling to help relieve any grieving, stress, or mental health problems caused or aggravated by the disaster or its aftermath. FEMA can cover a percentage of the medical, dental, and funeral expenses that are incurred as a result of a disaster. The Hazard Mitigation Grant Program provides additional funding (7.5 to 15 percent of total federal aid for recovery from the disaster) to states and Indian tribal governments to assist communities in implementing long- term measures to help reduce the potential risk of future damages to facilities. Not all programs are activated for every disaster. The determination to activate a program is based on the needs identified during the joint preliminary damage assessment. For instance, some declarations may provide only Individual Assistance grants and others only Public Assistance grants. Hazard Mitigation grants, on the other hand, are available for most declarations. Once a federal disaster is declared, the President appoints a federal coordinating officer to make an appraisal of the types of relief needed, coordinate the administration of this relief, and assist citizens and public officials in obtaining assistance. In addition, the federal coordinating officer establishes a joint field office at or near the disaster site. This office is generally staffed with a crew made up of permanent, full-time FEMA employees; a cadre of temporary reserve staff, also referred to as disaster assistance employees; and the state’s emergency management personnel. Public Law No. 110-28, the U.S. 
Troop Readiness, Veterans’ Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007, directs us to review how FEMA develops its estimates of the funds needed to respond to any given disaster, as described in House Report No. 110-60. Accordingly, we addressed the following questions: (1) What is FEMA’s process for developing and refining its cost estimates for any given disaster? (2) From 2000 through 2006, how close have cost estimates been to the actual costs for noncatastrophic natural disasters? (3) Given the findings from the first two questions and our relevant past work, what steps has FEMA taken to learn from past experience and improve its management of disaster-related resources, and what other opportunities exist? To address the first question, we examined FEMA policies, regulations, and other documents that govern its estimation processes. We interviewed senior staff from FEMA’s Office of the Chief Financial Officer, as well as headquarters and regional personnel responsible for FEMA’s disaster assistance programs (Public Assistance, Individual Assistance, and the Hazard Mitigation Grant Program). Although we looked at how the estimates from other federal, state, and local government and private nonprofit organizations feed into FEMA’s process, we did not review the estimating processes of these entities. Also, we did not review whether FEMA implemented its cost estimation processes as described. To address the second question, we compared FEMA’s cost estimates at various points in time (initial; 1, 2, 3, and 6 months; and 1 year) to actual costs to determine when estimates reasonably predicted actual costs. FEMA officials defined “reasonable” as within 10 percent of actual costs. Although the total number of disaster declarations from 2000 through 2006 was 363, we focused on noncatastrophic natural disasters. 
Two of the 363 disaster declarations were not natural—they were related to the terrorist attacks of 9/11—and another 14 were considered catastrophic. Of the remaining 347 disaster declarations, 83 (24 percent) had actual or close to actual costs—known as reconciled or closed, respectively—that could be compared to earlier estimates. None of these 83 disaster declarations occurred in 2005 or 2006. Although the analysis of these 83 disaster declarations is informative, it is not generalizable to all declarations, because these 83 do not represent the full population of disasters. Finally, to assess the reliability of FEMA’s estimate data, we reviewed the data FEMA officials provided and discussed data quality control procedures with them. We determined that the data were sufficiently reliable for purposes of this report. To address the third question of how FEMA has improved its management of disaster-related resources and to identify other opportunities for improvement, we reviewed available policies, procedures, and training materials for staff involved in developing disaster cost estimates or the management of disaster-related resources. In addition, we reviewed our earlier work that identified areas for improvement and discussed FEMA’s related management issues with DHS’s Deputy Inspector General for Disaster Assistance Oversight. We interviewed staff in FEMA’s Office of the Chief Financial Officer and OMB to learn more about FEMA’s planning for annual and supplemental requests for disaster-related resources. Finally, the work we did to address questions one and two provided valuable insights on other opportunities for FEMA to improve its management of disaster-related resources. 
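The comparison described above reduces to computing, for each declaration, the percentage difference between each estimate snapshot and actual costs, then summarizing across declarations and finding the first snapshot within the 10 percent band. A minimal sketch, using hypothetical figures rather than FEMA's data (the declaration labels and dollar amounts below are invented for illustration):

```python
from statistics import mean, median

# Illustrative sketch of the estimate-vs.-actual comparison.
# Snapshot labels and all figures are hypothetical, not FEMA data.
SNAPSHOTS = ["initial", "1mo", "2mo", "3mo", "6mo", "12mo"]

def pct_diff(estimate, actual):
    """Absolute difference between an estimate and actual costs,
    expressed as a percentage of actual costs."""
    return abs(estimate - actual) / actual * 100

def first_within_band(estimates, actual, band=10.0):
    """First snapshot whose estimate falls within `band` percent of
    actual costs (FEMA's definition of "reasonable"), or None."""
    for label, est in zip(SNAPSHOTS, estimates):
        if pct_diff(est, actual) <= band:
            return label
    return None

# Hypothetical declarations ($ millions): snapshots and actual costs.
disasters = {
    "DR-A": ([5.0, 6.0, 7.5, 8.2, 9.3, 9.8], 10.0),
    "DR-B": ([40.0, 33.0, 30.0, 27.0, 26.0, 25.5], 25.0),
}

# Mean and median percent difference at the 3-month snapshot.
at_3mo = [pct_diff(ests[3], act) for ests, act in disasters.values()]
print(round(mean(at_3mo), 1), round(median(at_3mo), 1))
print({d: first_within_band(e, a) for d, (e, a) in disasters.items()})
```

Summarizing with both mean and median, as in the report's figure 2, matters because a few declarations with large misestimates can pull the mean well above the median.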
Once a major disaster has been declared, FEMA staff deployed to the joint field office, along with state and local officials and other relevant parties (e.g., private nonprofit organizations, other federal agencies, etc.), develop and refine cost estimates for each type of assistance authorized in the disaster declaration. According to FEMA officials, these estimates build upon and refine those contained in the preliminary damage assessment. They said that the estimates contained in the preliminary damage assessment are “rough” and are used primarily to ensure that the damage is of such severity and magnitude that the state requires federal assistance. FEMA officials said that while the joint field office is open, FEMA program and financial management staff work continuously to refine these estimates. Staff provide these estimates to a disaster comptroller, who enters them into the Disaster Projection Report (DPR), which compiles and calculates the overall estimate. The disaster comptroller reports the estimates (via the DPR) to both the responsible regional office and the Disaster Relief Fund Oversight Branch within FEMA’s Office of the Chief Financial Officer. The first DPR is provided to these two entities within 1 week of the joint field office opening; updates are reported at least monthly or when large changes occur in the underlying estimates. However, regional office staff enter updated estimates into the Disaster Financial Status Report (DFSR)—FEMA’s central database for disaster costs—only on a monthly basis. After the joint field office is closed, the responsible regional office updates estimates for the given disaster along with all others within its jurisdiction. Regional office program staff (i.e., staff in Public Assistance, Individual Assistance, and the Hazard Mitigation Grant Program) provide updated estimates for all ongoing declared disasters for monthly DFSR reporting. 
How this information is entered into the DFSR database varies by region: in some regional offices, program staff enter the updated estimates for their programs’ costs (e.g., Public Assistance) directly into the DFSR, whereas in other regional offices financial management staff collect the updated estimates from program staff and enter them. Figure 1 illustrates FEMA’s disaster cost estimation process. FEMA’s overall estimate for any given disaster may cover programmatic and administrative costs in up to five different categories, and the methods for developing these underlying estimates vary. The overall cost estimate for any given disaster could include projected costs for Public Assistance, Individual Assistance, and Hazard Mitigation grants, depending on what type of assistance was authorized in the disaster declaration. In addition, the overall estimate may also cover projected costs for mission assignments—FEMA-issued tasks to other federal agencies or components within DHS, known as performing agencies—as well as administrative costs associated with operating the joint field office and administering disaster assistance. Our review focused on FEMA’s policies and procedures for developing these estimates, as described in related documents and by FEMA officials; we did not review whether these processes were implemented as described. Public Assistance officials said that initial estimates for their program are prepared by category of work and then refined for specific projects. Working with potential applicants following a disaster, program staff will develop overall estimates for Public Assistance costs for each category of emergency and permanent work, as authorized. Costs for Public Assistance are shared between the federal and state governments. 
The minimum federal share is 75 percent; the President can increase it to 90 percent when a disaster is so extraordinary that it meets or exceeds certain per capita disaster costs, and to 100 percent for emergency work in the initial days after the disaster irrespective of the per capita cost. Later, the overall estimate is refined to reflect the estimates for individual projects. The Public Assistance program uses many methods to develop these estimates. Common methods include time and materials estimates and competitively bid contracts. Public Assistance officials told us that they rely heavily on the applicants’ (state government agencies, local governments, etc.) prior experience and historical knowledge of costs for similar projects. For small projects (those estimated to cost less than $59,700 in fiscal year 2007, adjusted annually), applicants can develop the estimates themselves—FEMA later validates their accuracy through a sample—or they can ask FEMA to develop the estimates. According to a senior Public Assistance official, most applicants choose the latter option. For large projects (estimated to cost more than $59,700 in fiscal year 2007, adjusted annually), Public Assistance staff are responsible for working with applicants to develop project worksheets, which include cost estimates. According to senior program officials, Individual Assistance cost estimates depend on individuals’ needs. Using demographic, historical, and other data specific to the affected area, as well as a national average of costs, Individual Assistance staff project program costs. Depending on the type of Individual Assistance provided, estimates are refined as individuals register and qualify for certain types of assistance or as FEMA and the state negotiate and agree upon costs. 
For housing and other needs assistance—such as disaster-related medical, dental, and funeral costs—estimates are based on the number of registrations FEMA receives, the rate at which registrants are found eligible for assistance, and the type and amount of assistance for which they qualify. For fiscal year 2007, federal costs for housing assistance were limited to $28,600 per individual or household. This amount is adjusted annually. Other needs assistance is a cost-share program between the federal and state governments with the federal share set at 75 percent of costs. Disaster unemployment assistance is provided to those unemployed because of the disaster and not otherwise covered by regular unemployment insurance programs. The amount provided is based on state law for unemployment insurance in the state where the disaster occurred. The state identifies any need for crisis counseling services, and FEMA works with the state mental health agency to develop that estimate. Individual Assistance officials also told us that although they set aside $5,000 for legal services, FEMA is rarely billed for them. Hazard Mitigation Grant Program costs are formulaic and based on a sliding scale. If a grantee (state or Indian tribal government) has a standard mitigation plan, the amount FEMA provides to the grantee is a statutorily set percentage of the estimated total amount provided under the major assistance programs. This percentage ranges from 7.5 to 15 percent and is inversely related to the total; that is, when overall assistance estimates are higher, the percentage available for Hazard Mitigation grants decreases. Costs for Hazard Mitigation grants are shared among the federal government, grantees, and applicants (e.g., local governments), with a federal share of up to 75 percent of the grant estimate. FEMA calculates and provides an estimate of Hazard Mitigation funding to grantees 3, 6, and 12 months after a disaster declaration. 
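A sliding scale of this kind can be illustrated with a bracket-style calculation. The tier breakpoints below are assumptions for illustration only; the report states only that the rate ranges from 7.5 to 15 percent and falls as the total assistance estimate rises, so this sketch should not be read as the statutory schedule:

```python
# Illustrative sketch of the Hazard Mitigation sliding scale.
# Tier breakpoints are ASSUMED for illustration; the report gives
# only the 7.5-15 percent range, inversely related to the total.
TIERS = [  # (upper bound of tier in dollars, rate for that slice)
    (2_000_000_000, 0.15),
    (10_000_000_000, 0.10),
    (float("inf"), 0.075),
]

def hazard_mitigation_estimate(total_assistance):
    """Apply each tier's rate to the slice of the estimated total
    assistance that falls within that tier (bracket-style)."""
    amount, lower = 0.0, 0.0
    for upper, rate in TIERS:
        slice_ = max(0.0, min(total_assistance, upper) - lower)
        amount += slice_ * rate
        lower = upper
    return amount

def federal_share(grant_estimate, share=0.75):
    """Federal share of the grant estimate (up to 75 percent)."""
    return grant_estimate * share

# Effective rate falls as total assistance rises.
for total in (1_000_000_000, 20_000_000_000):
    est = hazard_mitigation_estimate(total)
    print(f"${total:,}: ${est:,.0f} ({est / total:.2%} effective)")
```

Because each tier's rate applies only to the slice within that tier, the effective percentage declines smoothly as the total grows, which matches the inverse relationship the report describes.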
The 6-month figure is a guaranteed minimum. At 12 months, FEMA “locks in” the amount of the 12-month estimate unless the 6-month minimum is greater. Cost estimates for mission assignments are developed jointly by FEMA staff and the performing agencies. Among the information included in a mission assignment are a description of work to be performed, a completion date for the work, an estimate of the dollar amount of the work to be performed, and authorizing signatures. Mission assignments may be issued for a variety of tasks, such as search and rescue missions or debris removal, depending on the performing agencies’ areas of expertise. The signed mission assignment document provides the basis for obligating FEMA’s funds. When federal agencies are tasked with directly providing emergency work and debris removal—known as direct federal assistance mission assignments—costs are shared in the same manner as Public Assistance grants. Estimates for FEMA’s administrative costs are developed by financial management staff in the joint field office. These costs are based on several factors, including the number of staff deployed, salary costs, rent for office space, and travel expenses. Although estimates developed in the immediate aftermath of a major disaster are necessarily based on preliminary damage assessments, decision makers need accurate cost information in order to make informed budget choices. FEMA officials told us that by 3 months after a declaration the overall estimate of costs related to any given noncatastrophic natural disaster is usually reasonable, that is, within 10 percent of actual costs. However, as figure 2 illustrates, our analysis of the 83 noncatastrophic natural disaster declarations with actual or close to actual costs shows that, on average, 3-month estimates were within 23 percent of actual costs, and the median difference was around 14 percent. 
Although the average (mean) difference did not achieve the 10 percent band until approximately 1 year, the median difference reached this band at 6 months. These results, however, cannot be generalized to disaster declarations for which all financial decisions have not been made, since we were able to compare estimates to actual costs for only about one-quarter of the noncatastrophic natural disasters declared from 2000 through 2006. From 2000 through 2006, there were 347 noncatastrophic natural disasters. As of June 30, 2007, 83 of these (approximately 24 percent) had actual or near actual costs to which we could compare estimates, as figure 3 illustrates. Fourteen disasters were “reconciled,” meaning that all projects were completed and the FEMA-State Agreement was closed; 69 disasters were “closed,” meaning that financial decisions had been made but not all projects were completed. The rest of the disasters (264) were “programmatically open,” meaning financial decisions were not completed, eligible work remains, and estimates are subject to change. According to FEMA officials, it takes 4 to 5 years to complete all work for an “average” disaster. Time frames for the underlying assistance programs vary. For example, according to a FEMA official, Individual Assistance takes approximately 18 months and Public Assistance 3 years to complete all work. Projects using Hazard Mitigation grants are expected to last 4 years, although they can be extended to 6 years. Accurate data permit decision makers to learn from previous experience—both in terms of estimating likely costs to the federal government and in managing disaster assistance programs. However, the way FEMA records disaster information, specifically the way in which it codes the disaster that occurred, inhibits rather than facilitates this learning process. 
The combination of a single-code limit to describe disasters, inconsistent coding of disasters with similar descriptions, and overlapping codes means that the data are not easily used to inform estimates and other analyses. Such issues mean that we could not compare estimated and actual costs by type of disaster. Moreover, they limit FEMA’s ability to learn from past disasters. Every disaster declaration is coded with an incident type to identify the nature of the disaster (e.g., earthquake, wildfire, etc.). As shown in table 1, there are 27 different incident codes in the DFSR database. We found problems with these data. First, the coding of incident type did not always match the description of the disaster. For example, 31 declarations are coded as tsunamis, but many of these are described—and should be coded—as something else. Second, each disaster declaration can be coded with only one incident type even though most descriptions list multiple types of incidents. We found declarations with similar descriptions coded differently—FEMA has no guidance on how to select the incident type code to be used from among the types of damage. For example, a number of declarations are described as “severe storms and flooding” or “severe storms, flooding, and tornadoes,” but sometimes these were coded as flooding, other times as severe storms, and still other times as tornadoes. Any coding system should be designed for the purpose it must serve. From the point of view of looking at the cause of damage (e.g., water, wind, etc.), many of the 27 incident codes track weather events but do not necessarily capture or elaborate on the type of information relevant to FEMA’s mission of providing disaster assistance. Moreover, they are not all mutually exclusive and thus some codes could be consolidated or eliminated. 
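One way to address the single-code limit would be a schema that records multiple incident types per declaration and validates each against a controlled vocabulary. The sketch below is hypothetical (the class, field names, and code list are invented for illustration, not FEMA's DFSR design):

```python
from dataclasses import dataclass

# Hypothetical controlled vocabulary (illustrative subset, not
# FEMA's actual 27 incident codes).
INCIDENT_TYPES = {"severe_storm", "flooding", "tornado",
                  "hurricane", "earthquake", "wildfire"}

@dataclass(frozen=True)
class Declaration:
    number: str
    description: str
    incident_types: frozenset  # multiple types per declaration

    def __post_init__(self):
        unknown = self.incident_types - INCIDENT_TYPES
        if unknown:  # reject codes outside the controlled list
            raise ValueError(f"unknown incident codes: {sorted(unknown)}")

def declarations_involving(declarations, incident_type):
    """All declarations tagged with the given incident type."""
    return [d for d in declarations if incident_type in d.incident_types]

decls = [
    Declaration("DR-0001", "severe storms and flooding",
                frozenset({"severe_storm", "flooding"})),
    Declaration("DR-0002", "severe storms, flooding, and tornadoes",
                frozenset({"severe_storm", "flooding", "tornado"})),
]
print([d.number for d in declarations_involving(decls, "flooding")])
```

With a set-valued field, two declarations with similar descriptions are tagged identically and both surface in a query for any of their incident types, which is the kind of comparison across similar disasters the report says the current one-code scheme prevents.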
For example, coastal storms (C), hurricanes (H), and typhoons (J) might all be seen as describing similar events and are therefore candidates for consolidation. FEMA officials identified several ways in which FEMA takes past experience into account and uses historical data to inform its cost estimation processes for any given disaster. For example, Individual Assistance officials told us that they use demographic data (such as population size and average household income) and a national average of program costs to predict average costs for expected applicants. Furthermore, based on past experience, Individual Assistance officials adjust cost estimates for different points in time during the 60-day registration period. Individuals with greater need tend to apply within the first 30 days of the registration period, according to Individual Assistance officials. This is usually followed by a lull in registrations, then an increase in registrations prior to the close of the registration period. The Public Assistance program has compiled a list of average costs for materials and equipment, which is adjusted for geographic area. As noted earlier, the Public Assistance program also relies heavily on the past experience and historical knowledge of its applicants for the costs of similar projects. Staff within FEMA’s Office of the Chief Financial Officer also contribute to FEMA’s learning from past disasters. For example, in collecting and compiling estimates at the joint field office, the disaster comptroller may question certain estimated costs based on his or her past experience with similar disasters. Similarly, once these estimates are reported to the Disaster Relief Fund Oversight Branch, staff there will review the DPR and, based on their knowledge of and experience with past disasters, may question certain estimates and compare them to similar past disasters. 
Office of the Chief Financial Officer staff also have worked with others throughout FEMA to develop a model to predict costs for category 3 or higher hurricanes prior to and during landfall. Among other types of data, the model uses historical costs from comparable hurricanes to predict costs. Although the model is finished, it has not been fully tested; no category 3 or higher hurricanes have made landfall in the United States since it was developed. FEMA has taken several steps to improve its management of disaster-related resources. In the past few years, FEMA has undertaken efforts to professionalize and expand the responsibilities of its disaster comptroller cadre. For example, FEMA has developed and updated credentialing plans since 2002 in an attempt to ensure that comptrollers are properly trained. The agency has also combined the Disaster Comptroller and Finance/Administration Section Chief into one position to better manage financial activities at the joint field office. The Office of the Chief Financial Officer introduced the DPR—developed by the Disaster Relief Fund Oversight Branch—as a tool for comptrollers to standardize the formulation and reporting of disaster cost projections. At the time of our review, FEMA was converting six disaster comptrollers from temporary to permanent positions. Officials told us that they plan to place two comptrollers in headquarters to assist with operations in the Office of the Chief Financial Officer, and four in regional offices to provide a “CFO presence” and to have experienced comptrollers on hand to assist with disasters. FEMA has also taken steps to better prepare for disasters. According to FEMA officials, the agency is focusing on “leaning forward”—ensuring that it is in a state of readiness prior to, during, and immediately following a disaster. For example, FEMA officials told us that they pre-position supplies in an attempt to get needed supplies out more quickly during and after a disaster. 
Similarly, FEMA has negotiated and entered into a number of contingency contracts in an attempt to begin work sooner after a disaster occurs and to potentially save money in the future since costs are prenegotiated. According to FEMA officials, each disaster is unique, and because of this, FEMA “starts from scratch” in developing estimates for each disaster. Although each disaster may be unique, we believe that commonalities exist that would allow FEMA to better predict some costs, and we have identified a number of opportunities to further its learning and management of resources. FEMA officials told us that a number of factors can lead to changes in FEMA’s disaster cost estimates, some of which are beyond its control. For example, the President may amend the disaster declaration to authorize other types of assistance, revise the federal portion of the cost share for Public Assistance, or cover the addition of more counties. Also, hidden damage might be discovered, which would increase cost estimates. Fluctuations in estimates also may occur with events such as the determination of insurance coverage for individuals and public structures or higher-than-estimated bids to complete large projects (Public Assistance). Changes in the state or local government housing assistance strategies can also drive changes in costs. However, the fact that these factors are beyond FEMA’s control does not mean FEMA has no way to improve its estimates. FEMA could conduct sensitivity analyses to understand the marginal effects of different cost drivers, such as the addition of counties to a declaration, revisions to the cost share, or the determination of insurance coverage, and to provide a range for the uncertainty created by these factors. We recently reported that, as a best practice, sensitivity analysis should be used in all cost estimates because all estimates have some uncertainty. Using its experiences from prior disasters, FEMA could analyze the underlying causes of changes in estimates. 
This could help FEMA develop and provide to policymakers an earlier and more realistic range around its point estimate. In addition, there are other areas where FEMA has greater control. FEMA could review the effect its own processes have on fluctuations in its disaster cost estimates and take actions to better mitigate these factors. For example, FEMA officials told us that mission assignments are generally overestimated but these are not corrected until the performing agencies bill FEMA. We previously reported that when FEMA tasks another federal agency with a mission assignment, FEMA records the entire amount up front as an obligation, but does not adjust this amount until it has received the bill from the performing agency, reviewed it, and recorded the expenditure in its accounting system. The performing agency might not bill FEMA until months after it actually performs the work. If upon reviewing supporting reimbursement documentation FEMA officials determine that some amounts are incorrect or unsupported, FEMA may retrieve or “charge back” the moneys from the agencies. In these instances, agencies may also take additional time to gather and provide additional supporting documentation. We made several recommendations aimed at improving FEMA’s mission assignment process and FEMA officials told us that they are reviewing the management of mission assignments. One official posited that overestimates of mission assignments could have caused the overall estimates to take longer than expected to reach the 10 percent band FEMA officials defined as a reasonable predictor of actual costs. If a review of the mission assignment process shows this to be the case, FEMA should take steps—such as working with performing agencies to develop more realistic mission assignment estimates up front and ensuring that these agencies provide FEMA with bills supported by proper documentation in a timely manner—to improve this process and lessen its effect on the overall estimates. 
If, however, the overestimation of mission assignments is not driving these changes, FEMA should focus on identifying what is and take appropriate actions to mitigate it. Another area that could warrant review is the determination of eligible costs for Public Assistance. For example, after Public Assistance projects are completed, FEMA sometimes adjusts costs during reconciliation to disallow ineligible costs or determine that other costs are eligible. Focusing on this issue earlier in the process might lead to a more accurate determination of costs eligible for reimbursement and so improve projections. FEMA could also expand its efforts to better consider past experience in developing estimates for new disasters. For example, in tracking incident types, FEMA could improve both the accuracy and the usefulness of the data for its analytic and predictive purposes. A review and revision of incident type codes to reflect the cause(s) of damage would tie the data and coding to their purposes. This could permit making comparisons among similar disasters to better inform and enhance both cost estimates and decision making. Also, FEMA could ensure that for past declarations in the DFSR database, as well as for future declarations, incident codes match the related descriptions and are consistently entered. This effort could be aided by revising the DFSR database to allow for multiple incident types for each declaration to better reflect what occurred. Other opportunities may also exist for the assistance programs. For example, in predicting costs for the Individual Assistance program, the usefulness of a national average should be examined. The substitution or addition of more geographically specific indicators might better predict applicant costs. In some ways, FEMA recognizes the value of using past experience to inform current estimates. 
For example, it draws upon the experience of its disaster comptrollers and staff in the Disaster Relief Fund Oversight Branch to question estimated costs. In addition, the aforementioned model to predict hurricane costs shows that FEMA recognizes that similar disasters may lead to similar costs, which can be analyzed and applied to better predict costs. According to FEMA officials, they are considering expanding the model to predict costs from other potentially catastrophic disasters, such as earthquakes. In the same vein, we believe that FEMA could expand upon this effort to better predict costs for other types of disasters, particularly those that are noncatastrophic and recur more frequently. FEMA’s opportunities to learn from past experience, especially from its disaster cost data, could be hampered by some costs that are no longer distributed to individual disaster declarations. FEMA officials told us that they use a “surge account” to support federal mobilization, deployment, and preliminary damage assessment activities prior to a disaster declaration. FEMA records subsequent costs by declaration. In the past these surge account costs were distributed on a proportional basis to each disaster declared in the year—so the data for the 83 disaster declarations we were able to review do include these costs. However, FEMA no longer does this. FEMA officials told us that they determined that there was no obvious benefit to distributing surge account costs to subsequent declarations, especially in potential hurricane events that might result in multiple declarations. We note that costs in the surge account have increased significantly in recent years. For fiscal years 2000 through 2003, annual obligations in the surge account were less than $20 million each year; after 2004 they increased to over $100 million each year, according to FEMA data as of June 30, 2007. 
In fact, by June 30, 2007—three-quarters of the way through fiscal year 2007—surge account costs for the fiscal year had already reached $350 million. No longer distributing these costs to disasters poses an analytical challenge for FEMA’s learning, as costs for current and future disasters are not comparable to those that occurred in the past.

To improve data reliability, FEMA could also develop standard operating procedures and training for staff entering and maintaining disaster estimate data in the DFSR database. In a recent review of FEMA’s day-to-day operations, we found that it does not have a coordinated or strategic approach to training and development programs. Further, FEMA officials described succession planning as nonexistent, and several cited it as the agency’s weakest link. We have previously reported that succession planning—a process by which organizations identify, develop, and select their people to ensure an ongoing supply of successors who are the right people, with the right skills, at the right time for leadership and other key positions—is especially important for organizations that are undergoing change. Like the rest of the government, FEMA faces the possibility of losing a significant percentage of staff—especially at the managerial and leadership levels—to retirement. About a third of FEMA’s Senior Executive Service and GS-15 leaders were eligible to retire in fiscal year 2005, and Office of Personnel Management data project that this percentage will increase to over half by the end of fiscal year 2010. Since FEMA relies heavily on the experience of its staff, such a loss could significantly affect its operations. 
Furthermore, according to FEMA officials with whom we met, there are no standard operating procedures or training courses for staff who are involved in entering and maintaining disaster cost estimate data in the DFSR database that would help mitigate this loss of knowledge and ensure consistency among staff in regional offices and in headquarters. Standard operating procedures also might reduce the coding errors described earlier.

FEMA may be able to improve its management of disaster-related resources by reviewing the reasons why “older” disaster declarations remain open and taking action to close and reconcile them if possible. By finalizing decisions about how much funding is actually needed to complete work for these open declarations, FEMA will be better able to target its remaining resources. FEMA officials told us that it takes 4 to 5 years to obligate all funding related to an average disaster declaration, but we found the average life cycle to be longer—a majority of the noncatastrophic natural disasters declared from 2000 through 2002 (5 to 7 years old) are still open (see table 2). We previously reported that in November 1997, FEMA’s Director chartered three teams of Office of Financial Management staff—referred to as closeout teams—to assist FEMA regional staff and state emergency management personnel in closing out funding activities for all past disasters. Their primary goal was to eliminate remaining costs for these disasters by obligating or recovering funds. We found that these teams were effective in doing so. According to FEMA officials, the closeout teams no longer formally exist because they had successfully closed out funding activities for past disasters. FEMA now relies on regional offices to perform this function, and several use teams similar to the closeout teams to undertake this work.

Given its mission, FEMA tends to focus much of its resources on disaster response and recovery. 
For example, as we previously reported, all FEMA employees are expected to be on call during disaster response, and no FEMA personnel are exclusively assigned to its day-to-day operations. Indeed, FEMA officials have said that what FEMA staff label “nondisaster” programs are maintained on an ad hoc basis when permanent staff are deployed, and the agency does not have provisions for continuing programs when program managers are called to response duties. Without an understanding of who holds a mission-critical position for day-to-day operations and what minimum level of staffing is necessary even during disaster response, business continuity and support for the disaster-relief mission are put at increased risk. FEMA staff’s strong sense of mission is no substitute for a plan and strategies of action. It is likely, therefore, that the tasks necessary to close disasters become subordinated to responding to new disasters. This contributes to a situation in which disaster declarations remain open for a number of years. However, closing and reconciling declarations is not merely a bookkeeping exercise. Given the multiple claims on federal resources, it is important to provide decision makers with the best information possible about current and pending claims on those resources.

FEMA’s annual budget requests and appropriations for disaster relief are understated because they exclude certain costs. Currently, annual budget estimates are based on a 5-year historical average of obligations, excluding costs associated with catastrophic disaster declarations (i.e., those greater than $500 million). This average—which serves as a proxy for an estimate of resources that will be needed for the upcoming year—presumes to capture all projected costs expected not only from future disasters but also those previously declared. However, as demonstrated by FEMA’s receipt of supplemental appropriations in years when no catastrophic disasters occurred, it does not do so. 
Excluding certain costs associated with previously declared catastrophic disasters results in an underestimation of annual disaster relief costs for two reasons. First, because FEMA finances disaster relief activities from only one account—regardless of the severity of the disaster—the 5-year average as currently calculated is not sufficient to cover known costs from past catastrophic disasters. Second, from fiscal years 2000 through 2006, catastrophic disasters occurred in 4 out of 7 years, raising questions about how infrequent such events really are. Excluding costs from catastrophic disasters in annual funding estimates prevents decision makers from receiving a comprehensive view of overall funding claims and trade-offs. This is particularly important given the tight resource constraints facing our nation. Therefore, annual budget requests for disaster relief may be improved by including known costs from previous disasters and some costs associated with catastrophic disasters.

Funding for natural disasters is not the only area where a reexamination of the distribution between funding through regular appropriations and funding through supplemental appropriations might be in order. In our work on funding the Global War on Terrorism (GWOT), we also noted that the line between what is funded through regular, annual appropriations and supplemental appropriations has become blurred. The Department of Defense’s GWOT funding guidance has resulted in billions of dollars being added for what DOD calls the “longer war against terror,” making it difficult to distinguish between base costs and the incremental costs to support specific contingency operations.

Given FEMA’s mission to lead the nation in mitigating, responding to, and recovering from major domestic disasters, many individuals as well as state and local governments rely on the disaster assistance it provides. 
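The 5-year average proxy and the adjustment suggested above can be illustrated with a small sketch. The $500 million catastrophic threshold comes from the report; the yearly obligation figures and the known-cost adjustment are hypothetical.

```python
# Sketch of the annual budget proxy: a 5-year average of obligations that
# excludes catastrophic declarations (over $500 million), then an adjusted
# request that also carries known costs from past catastrophic disasters.
# Yearly figures and the known-cost amount are hypothetical.
CATASTROPHIC = 500_000_000  # threshold from the report's definition

def five_year_average(yearly_declaration_obligations):
    """Average annual obligations, dropping catastrophic declarations."""
    totals = [sum(o for o in year if o <= CATASTROPHIC)
              for year in yearly_declaration_obligations]
    return sum(totals) / len(totals)

years = [
    [200e6, 150e6],
    [300e6],
    [250e6, 100e6],
    [2_000e6, 180e6],  # the $2.0 billion declaration is excluded
    [220e6],
]
base_request = five_year_average(years)        # 280e6
adjusted_request = base_request + 400e6        # plus known catastrophic costs
```

Carrying known costs in the base request, as the report recommends, would narrow the gap that supplemental appropriations have had to fill.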
The cost estimates FEMA develops in response to a disaster have an effect not only on the assistance provided to those affected by the disaster but also on federal decision makers, as supplemental appropriations will likely be needed. As such, it is imperative for FEMA to develop accurate cost estimates in a timely manner to inform decision making, enhance trade-off decisions, and increase the transparency of these federal commitments. We were able to identify ways in which FEMA has learned from past disasters; however, a number of opportunities exist for FEMA to continue this learning and to improve its cost estimation process. For example, FEMA could better ensure that incident codes are useful and accurate. In addition, a number of factors can lead to revisions in its estimates, but FEMA can mitigate these factors by conducting sensitivity analyses and reviewing its estimation processes to identify where improvements could be made. To further facilitate learning, FEMA needs to better ensure that it has timely and accurate data from past disasters, and this report suggests several ways in which FEMA could do so. FEMA can also explore refining its learning, for example, by using geographically specific averages to complement the national averages it uses. In addition, to facilitate analysis by making current disaster cost data comparable to past disaster data, FEMA could resume distribution of surge account costs to disasters, as appropriate. FEMA has also taken steps to improve its management of disaster-related resources, such as “leaning forward,” professionalizing and expanding the responsibilities of its disaster comptroller cadre, and developing a model to predict costs for category 3 or higher hurricanes prior to and during landfall. However, additional steps would further improve how FEMA manages its resources. 
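The sensitivity analyses mentioned above could take a simple one-at-a-time form, as in the following sketch. The cost drivers, the estimate formula, and the 20 percent swing are illustrative assumptions, not FEMA's actual model.

```python
# One-at-a-time sensitivity sketch: vary each cost driver by +/-20 percent,
# holding the others fixed, and record the marginal effect on the total
# estimate. Drivers, formula, and swing size are hypothetical.
def total_estimate(applicants, avg_award, infrastructure, admin_rate):
    return (applicants * avg_award + infrastructure) * (1 + admin_rate)

baseline = dict(applicants=10_000, avg_award=5_000,
                infrastructure=30_000_000, admin_rate=0.10)

def sensitivity(base_inputs, swing=0.20):
    base = total_estimate(**base_inputs)
    effects = {}
    for driver, value in base_inputs.items():
        low = total_estimate(**{**base_inputs, driver: value * (1 - swing)})
        high = total_estimate(**{**base_inputs, driver: value * (1 + swing)})
        effects[driver] = (low - base, high - base)
    return effects

# The driver with the widest range is the largest source of uncertainty
# beyond FEMA's control; the ranges bound the overall estimate accordingly.
effects = sensitivity(baseline)
```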
For example, to improve data reliability, FEMA could develop standard operating procedures and training for staff entering and maintaining disaster estimate data in the DFSR database. Also, although FEMA officials told us that it takes 4 to 5 years to finish all work related to an average disaster, our analysis of FEMA’s data shows that a majority of disasters declared from 2000 through 2002 were still open—that is, they had work ongoing—during our review. In the past FEMA formed teams to review these “older” disasters, which resulted in the elimination of remaining costs for these disasters by obligating or recovering funds. A similar effort today could have the same effect. Also, FEMA relies on supplemental appropriations both to cover the costs of providing assistance for new disasters and known costs from past disasters. To promote transparency in the budget process and to better inform decision making, annual budget requests for disaster relief should cover these known costs, including some from catastrophic disasters.

To better mitigate the effect of factors both beyond and within FEMA’s control to improve the information provided to decision makers; to better inform future estimates, including the ability to incorporate past experience in those estimates; and to improve the management of FEMA’s disaster-related resources, the Secretary of Homeland Security should instruct FEMA’s Administrator to take the following nine actions:

• Conduct sensitivity analyses to determine the marginal effects of key cost drivers to provide a range for the uncertainty created by factors beyond FEMA’s control.
• Review the effect FEMA’s own processes have on fluctuations in disaster cost estimates and take steps to limit the impact they have on estimates.
• Review the reasons why it takes 6 months or more for estimates to reasonably predict actual costs and focus on improving them to shorten the time frame.
• Undertake efforts—similar to those FEMA used to develop its model to predict hurricane costs—to better predict costs for other types of disasters, informed by historical costs and other data.
• Evaluate the benefits of using geographically specific averages in addition to national averages to better project Individual Assistance costs.
• Resume the distribution of surge account costs to individual disasters, as appropriate, to make cost data from past, current, and future disasters comparable.
• Review and revise incident type codes to ensure that they are accurate and useful for learning from past experience. At a minimum, incident codes should match the descriptions, be consistently entered, and reflect what occurred, which may require permitting multiple incident types for each declaration.
• Develop training and standard operating procedures for all staff entering incident type and cost information into the DFSR database.
• Review the reasons why “older” disasters remain open and take action to close and reconcile them if possible.

To promote a more informed debate about budget priorities and trade-offs, the Secretary of Homeland Security also should instruct FEMA’s Administrator to work with OMB and Congress to provide more complete information on known costs from prior disasters and costs associated with catastrophic disasters as part of the annual budget request.

We requested comments on a draft of this report from the Secretary of Homeland Security. In its comments, DHS generally agreed with eight of our ten recommendations. It stated it would take our recommendation to conduct sensitivity analyses to determine the marginal effects of key cost drivers under advisement and did not comment on our recommendation that it work with OMB and Congress to provide more complete information as a part of its annual budget requests. FEMA also provided technical comments, which we have incorporated as appropriate. 
We are sending copies of this report to the Director of OMB, the Secretary of Homeland Security, the Administrator of FEMA, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9142 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are acknowledged in appendix I. In addition to the individual listed above, Carol Henn, Assistant Director; Benjamin T. Licht; and Kisha Clark made significant contributions to this report. Pedro Briones, John Brooks, Stanley Czerwinski, Peter Del Toro, Carlos Diz, Gabrielle Fagan, Chelsa Gurkin, Elizabeth Hosler, William Jenkins, Casey Keplinger, Tracey King, Latesha Love, James McTigue, Jr., Tiffany Mostert, John Vocino, Katherine Hudson Walker, Greg Wilmoth, and Robert Yetvin also made key contributions to this report.
Public Law No. 110-28 directed GAO to review how the Federal Emergency Management Agency (FEMA) develops its disaster cost estimates. Accordingly, GAO addressed the following questions: (1) What is FEMA's process for developing and refining its cost estimates for any given disaster? (2) From 2000 through 2006, how close have cost estimates been to the actual costs for noncatastrophic (i.e., federal costs under $500 million) natural disasters? (3) What steps has FEMA taken to learn from past experience and improve its management of disaster-related resources and what other opportunities exist? To accomplish this, GAO reviewed relevant FEMA documents and interviewed key officials. GAO also obtained and analyzed disaster cost data and determined that they were sufficiently reliable for the purposes of this review. After a disaster is declared, FEMA staff deployed to a joint field office work with state and local government officials and other relevant parties to develop and refine cost estimates. The overall estimate comprises individual estimates for FEMA's assistance programs plus any related tasks assigned to other federal agencies (mission assignments) and FEMA administrative costs. The methods used to develop these estimates differ depending on program requirements including, in some cases, historical knowledge. FEMA officials told GAO that cost estimates are updated on a continuing basis. Decision makers need accurate information to make informed choices and learn from past experience. FEMA officials stated that by 3 months after a declaration estimates are usually within 10 percent of actual costs--which they defined as reasonable. GAO's analysis showed that decision makers did not have cost information within this 10 percent band until 6 months after the disaster declaration. 
These results cannot be generalized since this comparison could only be made for the 83 (24 percent) noncatastrophic natural disaster declarations for which final financial decisions had been made. Disaster coding issues also hamper FEMA's ability to learn from past experience. For example, in several instances the code for the incident type and the description of the disaster declaration did not match. Officials described several ways in which FEMA has learned from past disasters and improved its management of disaster-related resources. For example, FEMA uses a national average to predict costs for expected applicants for Individual Assistance. FEMA has also taken several actions to professionalize and expand the responsibilities of its disaster comptrollers. Nonetheless, FEMA could further learn from past experience by conducting sensitivity analyses to identify the marginal effect various factors have on causing fluctuations in its estimates. FEMA could improve its management of disaster-related resources by developing standard procedures for staff involved in entering and updating cost estimate data in its database.
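The 10 percent comparison GAO made can be sketched as a simple check: find the first month at which a declaration's cost estimate falls within 10 percent of actual (final) costs. The monthly estimate series below is hypothetical.

```python
# Sketch of the 10 percent band check: return the earliest month whose
# estimate is within the band around actual (final) costs, or None if no
# estimate ever gets that close. The series below is hypothetical.
def first_month_within_band(estimates_by_month, actual_cost, band=0.10):
    for month in sorted(estimates_by_month):
        if abs(estimates_by_month[month] - actual_cost) <= band * actual_cost:
            return month
    return None

# Hypothetical declaration whose estimates converge over time: the estimate
# enters the band only at month 6, mirroring the pattern GAO's analysis found.
estimates = {1: 40e6, 3: 28e6, 6: 22e6, 9: 21e6}
month = first_month_within_band(estimates, actual_cost=20e6)
```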
Concerns about the procurement of currency paper resulted in Congress including in the 1997 Emergency Supplemental Appropriations Act a requirement that we complete a comprehensive analysis of the “optimum circumstances for government procurement of distinctive currency paper” and report our findings to the House and Senate Committees on Appropriations. In the conference report accompanying the appropriations bill, the Conference Committee expressed concern over the fact that the Bureau of Engraving and Printing (BEP) of the Department of the Treasury has bought virtually all of its paper for the nation’s currency from a single supplier for over 100 years. The Conference Committee directed that we report on any limitations on competition in currency paper procurement and possible alternatives to the way BEP has been buying the paper, the fairness and reasonableness of prices paid for the paper, the potential for disruption of the availability of currency paper from BEP’s reliance on a single supplier, and other matters. In June 1997, the Chairman of the House Government Reform and Oversight Committee asked that we also report our findings to that Committee because of its interest in federal procurement matters, and Senator Lautenberg requested that we report our findings to his office as well. In September 1997, 16 members of Congress informed us of their interest in our analysis and expressed their opinion that a review of the potential benefits and drawbacks of a single-supplier relationship would be appropriate. 
The overall objective of our review as stated in the 1997 Emergency Supplemental Appropriations Act was to analyze the “optimum circumstances for government procurement of distinctive currency paper.” However, because that objective was broad and numerous congressional parties were interested in this review, we met with the interested Members’ and committees’ offices to determine the specific issues they wanted addressed as well as approaches to address those issues. Although we identified a number of concerns and issues, they are all covered under the following three objectives:

• Have BEP’s efforts to encourage competition for procuring currency paper been effective?
• Have prices paid for currency paper been fair and reasonable, and has the quality of the paper been ensured?
• Is there potential for disruption to the U.S. currency paper supply from BEP’s reliance on a single supplier?

To address these objectives, we reviewed federal procurement statutes and regulations and specific laws related to currency paper. We reviewed various indicators of the competitiveness of the currency paper market, such as the number of paper manufacturers who said they were capable of supplying currency paper to BEP, and the factors that make it difficult for them to provide currency paper. We also reviewed BEP studies of the currency paper market and obtained information from other federal agencies, such as the Secret Service and the Department of Defense (DOD). To address the first two objectives, we reviewed BEP’s currency paper procurement files for paper contracts in effect from 1988 to 1997, analyzed how BEP bought currency paper during this period, and compared certain BEP actions with requirements in the FAR and applicable laws. 
We surveyed 30 domestic and foreign cotton paper manufacturers on their interests in supplying currency paper and factors that prevented them from competing for BEP currency paper contracts, and we surveyed other G-7 nations on how they procured banknote paper. We interviewed numerous officials of BEP, Treasury, the Secret Service, the Federal Reserve, Crane, and other agencies. We also interviewed several of the domestic and foreign cotton paper manufacturers that were included in our survey. To help us analyze the fairness and reasonableness of prices paid by BEP for currency paper, we analyzed how BEP used audits of the single supplier’s costs and proposals in its negotiations and evaluated whether BEP had an appropriate basis for determining the fairness and reasonableness of prices it paid for currency paper over the last 10 years. We toured paper mills of two cotton paper manufacturers, as well as BEP printing facilities in Washington, D.C., and Ft. Worth, TX, to observe how paper was produced and currency was printed. To address the third objective, we interviewed officials at BEP, the Federal Reserve, and Crane; reviewed BEP’s contingency plan for critical materials; and asked other G-7 nations what type of contingency reserves of banknote paper they maintained. We did our work in accordance with generally accepted government auditing standards from July 1997 to August 1998. We requested comments on a draft of this report from the Chairman, Board of Governors of the Federal Reserve System; the Secretary of the Treasury; and the Chief Executive Officer of Crane. We received written comments from BEP’s Acting Director that incorporated comments from the Treasury Department, written comments from Crane, and oral comments from the Federal Reserve. BEP’s and Crane’s comments are reprinted in appendixes VI and VII, respectively. Our summary of agency and Crane’s comments and GAO’s responses are discussed at the end of chapter 5. 
BEP and Crane also provided technical comments, which have been incorporated as appropriate in the report. A more detailed discussion of our objectives, scope, and methodology is contained in appendix I.

The optimum circumstances for the procurement of distinctive currency paper would include an active, competitive market for such paper where a number of responsible sources would compete for BEP’s requirements. However, this is currently not the case because of the unique market for currency paper and some statutory restrictions. After over 100 years of relying on a single source, Treasury and BEP completed studies in 1983 and 1996 on what it would take to encourage competition for procuring currency paper, and BEP recently took steps to encourage competition in matters under its control. However, several paper manufacturers told us that they would compete for BEP paper contracts if additional changes were made, such as allowing foreign-owned companies to compete to supply currency paper and extending the length of contracts to more than 4 years. These changes would require existing statutory limitations to be amended. There are also other options for obtaining competition that are allowed under procurement laws and have been used by other federal agencies.

When the government purchases common commercially available goods and services, obtaining competition is relatively easy. However, when the government purchases goods that serve only the government’s needs, competition is less likely to occur. In currency paper procurements, obtaining competition is challenging, partly because there are few cotton paper manufacturers, currency paper is unique to the government’s needs, and a large investment in capital equipment is required. Factors that inhibited competition were identified in the 1996 Treasury/BEP currency paper study. 
These factors include (1) the cost of the initial capital investment to build or retrofit a plant to produce currency paper; (2) the short start-up period required to comply with specified paper deliveries; (3) the risks and uncertainties inherent in entering a limited, government-controlled market; and (4) the restriction on acquiring distinctive currency paper from foreign-owned or controlled companies contained in the Conte Amendment. Potential suppliers told BEP that it would take between $20 million and $150 million to build or retrofit the necessary plant and equipment to provide currency paper to BEP; and that because of the risks inherent in entering a limited, government-controlled market, some form of financial assistance from BEP would be necessary. The 1996 study also cited delivery requirements, usually requiring delivery starting at or shortly after contract award, as a significant inhibitor, given that manufacturers said it takes 1 to 2 years to become operational. The 1996 Treasury/BEP study also found that the absence of a guaranteed minimum production commitment sufficient to cover the cost of constructing and equipping a plant was an inhibitor. Potential suppliers told Treasury/BEP they would require a long-term commitment to manufacture a minimum of 40 percent of BEP’s requirements in order to begin production. According to BEP, Treasury currently has a study under way aimed at projecting the future demand for currency. The study is being done by representatives from Treasury, BEP, the Mint, and the Federal Reserve. The study is expected to be done by November 30, 1998. BEP awarded five separate contracts to Crane for currency paper from 1988 to 1997. Two of these contracts, 95-23 and 97-10, were awarded to Crane on a sole-source basis. The other three contracts, 88-205, 91-18, and 93-14, were also awarded to Crane because BEP did not receive any other offers in response to its solicitations. 
Additionally, in accordance with the Conte Amendment, BEP was precluded from awarding a currency paper contract to foreign-owned or controlled firms. Although some matters affecting competition in the currency paper market are beyond BEP’s control, BEP’s solicitations for currency paper before 1997 did not attempt to encourage competition by using means within its control. As shown in table 2.1, some of BEP’s solicitations contained a 1- or 2-year production period and required potential suppliers to start providing currency paper shortly after contract award. Although offerors can always request that financial assistance be provided, BEP did not offer to provide potential offerors financial assistance for capital equipment in its solicitations.

In 1997, BEP made significant changes to its solicitation for currency paper. Solicitation 97-13, issued in May 1997, provided for up to a 4-year contract with multiple award scenarios that allowed competitors to submit an offer on various-sized lots, and it also provided up to 24 months for a start-up period under certain award scenarios. Because of BEP’s concerns about violating the 4-year limit on contracts for manufacturing currency paper, the solicitation provided that any required start-up period would be deducted from the 4-year production period. In addition, solicitation 97-13 also provided that BEP will consider “innovative acquisition and financing arrangements” proposed by offerors. Although BEP has taken actions to encourage competition in solicitation 97-13, such as providing for a longer contract performance period than in past solicitations and allowing a 24-month start-up time, some paper manufacturers responding to our survey told us there were other matters that prevented them from competing for the currency paper contract. 
Some manufacturers said they need an even longer guaranteed contract period, or financial assistance provided by the government, to recover the capital investment required to purchase the equipment to produce the paper. Several paper manufacturers also would like to be able to enter into joint ventures with a foreign paper manufacturer to produce currency paper, but they are unable to do so because of the Conte Amendment, as interpreted by Treasury. Twelve of the 20 paper manufacturers responding to our survey of 30 worldwide firms said that they would be interested in supplying currency paper to BEP and are capable now, or would be in the near future, of supplying at least part of BEP’s currency paper needs, but several matters prevent them from competing. Some of these matters are the same as those identified in BEP’s 1996 currency paper study. None of the 12 interested paper manufacturers said that the size of the currency paper market would make it difficult for them to compete. Table 2.2 summarizes the factors inhibiting competition reported by the 12 paper manufacturers that said they would be interested in supplying currency paper to BEP. Nine of the 12 interested manufacturers said the performance period in BEP’s currency paper contracts has been too short to recover the necessary capital investment. One paper manufacturer said that it is not possible to recover start-up costs in less than 5 years. A second domestic paper manufacturer told us the major reason it did not submit a proposal was that the contract period was too short to recover capital investment, and it believed it was unlikely that this situation could be improved. As a result, this paper manufacturer decided that continuing its investment in product development was too risky and decided not to submit a proposal. 
According to another domestic paper manufacturer, the amount of capital investment necessary to meet BEP’s requirements cannot be recovered through the price of paper sold to BEP over a 2- to 4-year contract. This paper manufacturer told us that if BEP extends the length of the currency paper contract to at least 10 years, it would consider submitting an offer. BEP’s currency paper contracts have generally been for 1 to 2 years with three 1-year options, with the exception of the current solicitation 97-13, which has a performance period of up to 4 years. By law, the contract term to purchase U.S. currency paper cannot exceed 4 years. Additionally, U.S. money order paper contracts are for 5 years, and U.S. passport paper contracts are for 3 base years, with two 1-year options. Both passport and money order paper have security features (i.e., watermarks and security thread) similar to those of currency paper.

The Conte Amendment provides:

“None of the funds made available by this or any other Act with respect to any fiscal year may be used to make a contract for manufacture of distinctive paper for United States currency and securities pursuant to section 5114 of title 31, U.S.C., with any corporation or other entity owned or controlled by persons not citizens of the United States, or for the manufacture of such distinctive paper outside the United States or its possessions. This subsection shall not apply if the Secretary of the Treasury determines that no domestic manufacturer of distinctive paper for United States currency or securities exists with which to make a contract and if the Secretary of the Treasury publishes in the Federal Register a written finding stating the basis for the determination.”

Although the Conte Amendment itself does not specify ownership and control requirements, the accompanying Conference Report states that BEP may not enter into such a contract with an entity if 10 percent or more of the entity is owned or controlled by a group of foreign persons. 
In 1995, the report of the House Appropriations Committee that accompanied the Treasury, Postal Service, and General Government Appropriations Act for fiscal year 1996 attempted to redefine the intended meaning of the Conte Amendment. The report stated that a domestic corporation or other entity is one “created under the laws of the United States or any one of its states or possessions, and . . . more than 50 percent of [which] is held by United States citizen(s).” Treasury’s Office of General Counsel concluded in a March 1997 legal opinion that the 1995 House Appropriations Committee Report language cannot modify the constraints established in the Conte Amendment and the contemporaneous explanation of the provision in the 1987 conference report.

As part of our review, the House Appropriations Subcommittee on Treasury, Postal Service, and General Government asked us to review Treasury’s position that the Conte Amendment precludes BEP from entering into a contract for the manufacture of distinctive currency paper with an entity of which 10 percent or more is owned or controlled by a foreign company. Because the language designating a 10-percent limitation on foreign ownership or control is in the 1987 conference report and is not specified in the statute itself, Treasury’s interpretation is not mandated by the statute. Nevertheless, in the absence of language in the statute defining what constitutes foreign ownership and control, it is reasonable for Treasury to rely on the 1987 conference report as guidance for interpreting and applying the statutory language. Thus, we believe that Treasury’s interpretation of the restriction in the Conte Amendment is within its discretion.

Six of the 12 paper manufacturers we surveyed that were interested in supplying currency paper stated that their decisions not to respond to BEP solicitations had been influenced by the Conte Amendment restriction on foreign ownership. Three of the five foreign paper manufacturers saw this as an issue. 
According to one domestic manufacturer, the need to have 90-percent U.S. ownership prevents a foreign entity from participating in a fashion that gives it any kind of financial incentive. One foreign manufacturer commented that as a foreign paper company, it would want a larger participation than the 10 percent currently allowed. Similarly, two domestic paper manufacturers commented to BEP in 1995 that the restriction on foreign ownership limited their ability to gain access to the only source of currency paper manufacturing expertise, particularly for security threads and watermarks, outside of Crane. Of the 12 interested firms responding to our survey, only 3 foreign firms said they could currently produce all 3 types of currency paper. According to BEP, there are four major currency paper manufacturers that are internationally recognized in currency paper manufacturing and security. Only one of the four, Crane, is located in the United States. The other three, Portals Ltd., Papierfabrik Louisenthal, and Arjo-Wiggins, are located overseas. Portals Ltd., located in the United Kingdom, said it has over 300 years of experience in supplying currency paper to the British government and 40 other countries. Papierfabrik Louisenthal, located in Germany, has supplied banknote paper to Germany since 1967. Arjo-Wiggins, located in France, has supplied banknote paper to France since 1789. American companies that we surveyed said that under the interpretation requiring 90-percent U.S. ownership or control, they have difficulty attracting the interest of foreign companies in a joint venture in which the foreign companies’ expertise in currency production could be shared. Similarly, Portals, a foreign-owned paper manufacturer, told us it built a paper manufacturing facility in Hawkinsville, GA, in 1980 for two market segments: U.S. currency paper and high-security documents. 
According to Portals officials, they had been visiting BEP for a number of years regarding their interest in providing upgraded security features for U.S. currency paper. Portals officials stated that on the basis of the favorable reception from BEP, Portals built the Hawkinsville mill, which was capable of producing 2,500 tons of paper a year immediately and had the potential to move quickly up to 10,000 tons of paper a year. Ultimately, it was the passage of the Conte Amendment that caused Portals to sell the Hawkinsville mill in 1988, according to Portals officials. Reliance on a single domestic supplier for currency paper is not unique to the United States. In our survey of the other G-7 countries, Germany, France, and Italy said they restrict their purchases of banknote paper to suppliers located in their countries. England and Canada said that they do not restrict their purchases of banknote paper only to suppliers located in their countries. However, England has historically purchased its paper from Portals, and Canada competitively awards both the manufacture of banknote paper and the printing of the notes. In Japan, the Japanese government is responsible for producing the paper and printing banknotes. Secret Service officials strongly oppose any production of U.S. currency paper outside the United States because the Secret Service does not have authority to exercise security oversight of the personnel or plant facility in a foreign country. The Secret Service further stated that although it may be able to make agreements allowing for such oversight, a foreign country’s law could preclude any investigative action or oversight by United States law enforcement personnel. The Secret Service also pointed out that the logistics of moving a critical material across borders via a variety of transportation modes would pose additional security risks. BEP security officials told us that they share the same concerns. 
Secret Service officials pointed out that they did not believe that the percentage of foreign ownership would pose a security problem as long as the paper is produced on U.S. soil. Officials from both the Secret Service and BEP’s Office of Security stated that because of their concern about a catastrophe, they would be in favor of having more than one supplier of currency paper, but they would strongly prefer that suppliers be located in the United States. We agree that the Secret Service and BEP security officials have valid concerns about the manufacture of U.S. currency paper outside the United States. However, there are other components used for currency production in the United States that are supplied by foreign companies. For example, BEP prints U.S. currency on Swiss-designed sheet-fed Intaglio printing presses made by De La Rue Giori; and it buys the sheet currency inspection system and interim currency inspection system from Giesecke & Devrient, located in Germany. Currency ink is bought from Sicpa, a Swiss-owned company, which has facilities in the United States. Additionally, the Federal Reserve’s high speed currency processing machines are made in Germany. Six of the 12 interested paper manufacturers we surveyed said that given the short length of a currency paper contract, the high cost to finance capital equipment inhibits their ability to compete. Financing arrangements to assist these manufacturers could involve extraordinary measures, such as the government sharing in the cost of obtaining the capital equipment needed to build a currency plant by providing government-furnished property or by financing contractor acquired property (CAP). According to one domestic manufacturer, a new supplier would incur significant capital expenditures in order to meet BEP’s requirements, and the use of CAP would be essential to mitigate the capital investment needed. 
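The capital-recovery squeeze these manufacturers describe follows from simple arithmetic. The sketch below is illustrative only: the $40 million investment is the average estimate from the 1996 Treasury/BEP study, while the annual sheet volume and the contract terms compared are hypothetical assumptions, not figures from the solicitations.

```python
# Illustrative sketch (hypothetical volumes) of why a short contract term
# deters new entrants: capital must be recovered over only the production
# years left after start-up.

def capital_cost_per_sheet(investment, contract_years, startup_years, sheets_per_year):
    """Capital cost to recover per sheet over the contract's production years."""
    production_years = contract_years - startup_years
    if production_years <= 0:
        raise ValueError("no production time left after start-up")
    return investment / (production_years * sheets_per_year)

# $40 million average investment (1996 Treasury/BEP study); 400 million
# sheets per year is an assumed volume for illustration.
# A 4-year contract with a 2-year start-up leaves 2 production years:
short_term = capital_cost_per_sheet(40_000_000, 4, 2, 400_000_000)
# A 2-1/2-year start-up followed by a full 4-year production period,
# as one manufacturer proposed, doubles the recovery base:
longer_term = capital_cost_per_sheet(40_000_000, 6.5, 2.5, 400_000_000)
print(short_term, longer_term)  # 0.05 0.025
```

Under these assumptions, stretching the term from 4 to 6-1/2 years cuts the capital recovery burden from 5 cents to 2.5 cents per sheet, consistent with the manufacturers' view that contract length drives the investment decision.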
According to the 1996 Treasury/BEP study, the estimated capital investment needed to produce currency paper averaged $40 million and ranged from a low of $20 million to a high of $150 million, depending on whether an existing mill could be retrofitted or a new mill needed to be built. Moreover, the Treasury/BEP 1996 currency paper study also suggested that government financing of potential contractors’ equipment might be necessary to secure competition for currency paper. In July 1996, BEP posted a draft of solicitation 97-13 on the Internet stating that BEP would consider the feasibility of providing CAP. However, in May 1997, Treasury’s former Director of Procurement decided to remove CAP from the solicitation because, in his view, in the form that it was being proposed by BEP, CAP would not have increased competition. The Treasury Procurement Director concluded that a 4-year contract, inclusive of start-up time, would not allow enough actual production time to generate sufficient revenues for the contractors to make it worthwhile for them to risk the substantial investment required to compete, even using CAP. Treasury recommended that BEP revise solicitation 97-13 to state that offerors would be free to propose “innovative acquisition and financing arrangements.” One interested paper manufacturer told us that the removal of CAP from the final solicitation and replacement with language that said BEP would consider “innovative acquisition and financing arrangements” left too much uncertainty about the capital investment issue for the manufacturer to proceed with a proposal. Two other interested paper manufacturers said they would still be interested in competing for the contract without CAP, if the length of the contract were to be extended to at least 5 years. The start-up period historically allowed by BEP is too short, according to five of the paper manufacturers we surveyed who were interested in competing for BEP currency paper contracts. 
One domestic paper manufacturer said that a short start-up period permits only the incumbent to submit an offer. This paper manufacturer further stated that the 24-month start-up period allowed by BEP in solicitation 97-13 would result in the forfeiture of 2-1/2 years of manufacturing time, leaving less than the full 4-year production period. The paper manufacturer further believed that the start-up period should be added to the manufacturing contract, i.e., a 2-1/2 year start-up period followed by a 4-year contract; otherwise, no paper company would invest in the specialized plant and equipment that are required to meet the government’s security paper production needs before the contract is awarded. Another domestic supplier said that to prepare a facility for currency paper manufacturing would take 1 to 2 years. As noted previously, current law as interpreted by BEP restricts the total period for currency paper production contracts to 4 years. Prior to 1997, BEP required the supplier of currency paper to provide currency paper immediately or shortly after contract award. The start-up period for all distinctive currency paper in solicitation 97-13 is up to 24 months. The requirement to pay for a license to use the data and process for inserting the security thread, the security requirements for the manufacturing process, and the technology required to incorporate anticounterfeiting features in paper were also cited as factors inhibiting competition by 3 of the 12 interested paper manufacturers. One paper manufacturer filed a protest with BEP over the security thread license and other issues relating to the current solicitation. Specifically, the paper manufacturer stated that the solicitation places potential offerors, other than the incumbent, in the position of violating a patent held by Crane if they supply currency paper containing security thread made to the specifications outlined by BEP. 
According to this manufacturer, potential offerors are effectively precluded from providing distinctive currency paper with security thread and new currency design paper with watermark and security thread. In solicitation 97-13, BEP stated that it would provide the security thread as government-furnished material. However, BEP would have to negotiate with Crane to buy the security thread. BEP’s attempts to develop alternative sources in the 1960s and 1980s were not successful for a variety of reasons, including the following:
• One firm was unable to price its product competitively with BEP’s traditional supply source for a portion of the currency paper requirement.
• BEP discontinued its use of paper that another firm was developing.
• The firm whose paper development effort was discontinued needed technology from a foreign supplier to help it produce other types of paper that met BEP specifications.
In 1996, BEP studied the possibility of developing a second currency paper source. It concluded that two sources would probably be more costly than a single source but that it should continue to explore the marketplace through competitive solicitations to determine if there were viable alternative sources. The costs used in the 1996 study were based on an informal survey of paper producers that asked them how much capital investment would be required to prepare paper for BEP. For production costs, BEP assumed that a second producer would incur the same costs as Crane had. BEP’s analysis showed that a second source, producing about 40 percent of BEP’s needs, would increase the costs of producing paper by at least $21 million per year and possibly $37 million per year, depending on the amount of capital equipment the second producer acquired. However, BEP’s analysis did not reflect its subsequent decision to accept a higher profit rate with Crane to compensate for Crane’s investment in capital equipment, as it did on the two most recent contract actions with Crane. 
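The cost penalty that BEP's 1996 study attributed to a second source follows from basic fixed-cost arithmetic, which can be sketched as follows. Only the roughly 60/40 volume split and the study's $21 million to $37 million estimate come from the study as described above; every dollar figure in the code is a hypothetical assumption chosen for illustration.

```python
# Minimal sketch of the trade-off weighed in BEP's 1996 second-source
# study: splitting a fixed annual volume between two mills duplicates
# fixed costs.  All dollar figures below are hypothetical.

def annual_cost(sheets, fixed_cost, variable_cost_per_sheet):
    """Total yearly cost of one mill: fixed overhead plus per-sheet costs."""
    return fixed_cost + sheets * variable_cost_per_sheet

FIXED = 30_000_000        # assumed per-mill fixed cost per year
VARIABLE = 0.125          # assumed dollars per sheet

single_source = annual_cost(500_000_000, FIXED, VARIABLE)
dual_source = (annual_cost(300_000_000, FIXED, VARIABLE)     # 60% of volume
               + annual_cost(200_000_000, FIXED, VARIABLE))  # 40% of volume

# With identical unit costs at both mills, the entire cost increase is
# the second mill's duplicated fixed cost.
print(dual_source - single_source)  # 30000000.0
```

The mechanism, not the magnitude, is the point: as long as both producers carry their own overhead, dual sourcing raises total cost unless competition drives prices down by more than the duplicated fixed cost, which is the comparison the Mint's experience (discussed next) bears on.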
Other agencies have found it advantageous to develop a second source. DOD officials told us that they have used a strategy referred to as dual sourcing to develop a second supplier in a sole-source situation for some weapon systems. For example, according to Air Force officials, dual sourcing was used to develop a second supplier of engines for F-16 fighters. Between 1967 and 1972, with few exceptions, the U.S. Mint awarded the contracts for clad material to one company. However, the Mint realized the vulnerability of having only one supplier, and it attempted to develop additional sources by awarding developmental contracts to firms that were interested in competing for future clad material contracts. The vendors selected, though, were unable to produce the material at an acceptable level of quality, according to Mint officials. Although the Mint did not have much success with developmental contracts, it currently has more than one supplier, according to Mint officials. Since 1993, the Mint has purchased clad material from two vendors that share 50 and 45 percent of the contract, respectively. The other 5 percent of the contract is divided between two developmental contractors. Mint officials told us that they did not have a developmental contract with the Mint’s current second supplier, but the second supplier responded to a competitive solicitation issued in 1993. Mint officials said that having two suppliers is better than one because competition helped prevent prices of clad material from rising as rapidly as they would have if there had not been any competition. Although the long-term relationship between BEP and Crane has historically provided quality currency paper, BEP did not generally demonstrate that it obtained fair and reasonable prices for the contracts, options, and extensions awarded between 1988 and 1997. 
To the contrary, the evidence available in the contract files showed that BEP sometimes paid what it believed to be too high a price when buying currency paper. For five contract actions, BEP’s contracting officers recommended accepting prices that they could not determine to be fair and reasonable because there was no other source for currency paper. BEP would not accept Crane’s proposed prices as final for half of the 10-year period covered by our review; instead, BEP agreed to pay Crane’s proposed prices as interim prices. The dispute was eventually settled by an arbitrator. Until recently, when BEP increased its negotiated profit objectives, the major disagreement between BEP and Crane involved profits. In determining whether Crane’s proposed prices were fair and reasonable, BEP relied primarily on audits of Crane’s proposals. However, until recently BEP did not obtain audits of Crane’s cost estimating system or post-award audits of some contracts; it also made little use of other cost analysis techniques in the earlier contracts in our sample and very little use of price analysis. Further, some of BEP’s procurement practices relating to the quantities of paper ordered and its failure to obtain royalty-free rights to the security thread caused, or may have caused, the government to pay more for currency paper than it should have. According to BEP officials, Crane has been a reliable source for paper. BEP and Crane officials said that a paper delivery to BEP has not been missed in over 100 years. BEP officials also said that the overall quality of the currency paper supplied by Crane has been good. In reviewing the files for currency paper contracts in effect from 1988 to 1997, we found only two references to problems with the paper that Crane supplied to the government during this period. The first problem involved bonding of the security thread to the paper. 
This problem began in 1991 and was due to a change in the adhesive system that was mandated by an environmental ruling in New Hampshire, where the thread was produced. BEP gave Crane a waiver to the contract until the problem was solved in 1994. According to BEP, the resolution of this problem created a second problem, which involved the inability of Crane to meet BEP’s standards for the folding endurance for the security threads. The folding endurance standard specifies how many times paper can be folded before it tears and is a measure of durability. The problem occurred in 1994, after the thread bonding adhesion was changed, and it was brought to BEP’s attention by Crane. BEP granted Crane a waiver from the folding endurance standards for the contract in effect at the time (95-23) and lowered the standard for the subsequent contract (97-10). Of the five contracts awarded from 1988 to 1997, two were awarded on a sole-source basis to Crane. For the other three contracts, BEP issued competitive solicitations. Crane was the only company to submit an offer. In the absence of competition, BEP had to rely on cost and price analyses to evaluate proposed prices to determine if the prices paid for currency paper were fair and reasonable. Price analysis is to be used to verify that the overall price offered is fair and reasonable in comparison with current or recent prices for the same or similar items. Examples of price analysis include comparing proposed prices with (1) prices obtained for similar items through market research, (2) parametric estimates such as dollars per pound, and (3) previous prices. Under the Truth in Negotiations Act, offerors are required to submit and certify cost or pricing data to support the reasonableness of individual cost elements, under certain circumstances, when adequate price competition does not exist. 
Separate cost elements and profits are evaluated to determine how well the proposed costs represent what the costs of the contract should actually be, assuming reasonable economy and efficiency. Examples of cost analysis include the comparison of costs proposed for individual cost elements to historical costs and the evaluation of the need for and reasonableness of proposed costs. Contracting officers are to determine whether a proposed price is fair and reasonable on the basis of both a cost analysis to ensure the reasonableness of individual cost elements and a price analysis to ensure that the overall price, including profit, is fair and reasonable. The contracting officer’s determination is a judgment based on the results of the cost and price analyses. For the most part, BEP limited its cost analysis techniques to audits of Crane’s price proposals. The audits were done by the Treasury Inspector General (IG) for contracts awarded before 1992 and by the Defense Contract Audit Agency (DCAA) for contracts awarded after 1992. The audits generally consisted of a review of the proposed costs and a test of the reliability of the underlying data and records supporting the proposed costs, as well as the accounting principles used in developing the proposal. In several of the audits, the auditors qualified their work because of Crane’s cost accounting system. For example, for contract 88-205, option 1, the auditors observed that standard costs used in estimating were not adjusted for actual variances, which made them less reliable as a basis for estimation. BEP generally obtained audits of Crane’s price proposals, but it did not conduct a comprehensive price analysis for the five contracts we reviewed. BEP procurement records showed that it did not analyze the changes in prices from one contract to the next and did not compare proposed contract prices to the prices paid for similar items by other government agencies or countries. 
It did not always review cost trends, such as product yield rates, material prices, and proposed escalation over time, for the earlier contracts in our sample. More specifically, according to BEP’s procurement records:
• Audits of proposed costs were obtained for the first three contracts (88-205, 91-18, and 93-14) included in our review, and this information was used to evaluate Crane’s proposed costs; however, additional cost or price analysis was not done for these contracts. BEP stated that it did not have time to do a cost analysis for contract 91-18.
• Audits of proposed costs were also obtained for contract 95-23, and this information was used to evaluate Crane’s proposed costs. BEP said that it performed additional cost and trend analysis.
• For contract 97-10, BEP again used audit results and did more thorough cost analysis than it had previously done. BEP also did limited price analysis; for example, it compared proposed prices to prices under its existing contracts.
A BEP contracting officer said she attempted to obtain prices paid for currency paper from some other governments by telephoning them, but they said they were not willing to share this information due to its proprietary nature. We recognize that other governments may consider such information to be proprietary or be unwilling to share the information with an agency contracting official over the telephone. However, given the interest of the government in achieving a fair and reasonable price in this unique market, other more official and formal efforts to obtain the information, such as inquiries from the Secretary of the Treasury or the State Department, might be more successful. BEP procurement officials said they did not attempt to compare the prices proposed for currency paper with the amounts the U.S. Postal Service paid for money orders or the Government Printing Office paid for passport paper, because the products were different. 
Although we agree that the products are not identical, they are similar and they are bought competitively. The passport paper is cotton and pulp based, has security thread, and contains a watermark. Money orders also have security threads and watermarks, but they are made from wood pulp instead of cotton. Nonetheless, although comparisons among these types of products would not in themselves have provided a basis for definite conclusions, they may have provided some insight for assessing cost and price trends over time and in demonstrating the effects of competition on prices. The only other analysis used by BEP was for negotiating contract 97-10, and it included an analysis of the effects of changes in quantities ordered on Crane’s production costs. Similar analyses would have been helpful for the other contracts we reviewed. In accordance with section 9003(b) of the 1997 Emergency Supplemental Appropriations Act, the Secretary of the Treasury was required to certify that the price for contract 97-10 was fair and reasonable and that the terms of the contract were customary, appropriate, and in compliance with procurement regulations. The Secretary delegated this determination to the Director of BEP, who made these certifications on September 3, 1997. The five contracts we reviewed from 1988 to 1997 and their options and extensions resulted in 17 contract actions. The prices for the contracts and options are listed in appendix V. For these 17 actions, BEP determined the price to be fair and reasonable for 4, but it was not able to determine the price to be fair and reasonable for 5. BEP did not reach agreement on price for the remaining eight contract actions. For these eight, BEP used interim prices, which were later finalized and reduced by an arbitrator. 
For the eight contract actions that were finalized by an arbitrator, BEP and Crane were unable to reach agreement on (1) royalties paid to Crane’s affiliated subcontractor that produced the security thread, (2) allocation of commercial sales commissions to government contracts, (3) allocation of legal and consulting costs, and (4) profit. The arbitrator concluded in January 1995 that there was no common control of the affiliated company producing the thread, the allocation of Crane’s sales commissions to government contracts was not appropriate, and Crane’s allocation of legal and consulting costs was proper. The arbitrator also decided that Crane was entitled to higher profits than BEP had been willing to accept because of Crane’s needs for a fair return on its capital investments made to produce paper with less labor. The arbitrator commented that had the DOD weighted guidelines been used, the government’s and Crane’s positions would have been closer. The DOD guidelines provide a structured approach to develop a contract profit objective, and they emphasize the usefulness of facilities’ capital for buildings and equipment used by the contractor to improve productivity or to provide other benefits, such as improved reliability. The arbitrator decided that the settled price should be $212 million, which was $9.7 million (4.4 percent) lower than the interim payments that BEP had made to Crane. According to the arbitration settlement, billings for the subsequent 5 months of one contract action were also settled. During this period, BEP paid $2.1 million more in interim payments than the settled amount, bringing the total amount returned to BEP to $12.7 million. For 5 of the 17 contract actions, BEP was unable to determine the prices to be fair and reasonable. 
However, it accepted prices for these five contract actions because, according to the BEP contracting files, (1) there was no other source of paper and (2) the Federal Reserve’s currency requirements could not be met if the contract with Crane were not awarded. The major reason why BEP was not able to determine the prices to be fair and reasonable was that BEP contracting officers questioned the profit proposed by Crane. In general, BEP contracting officers were not willing to accept Crane’s proposed profit levels until after the award of contract 95-23, when BEP modified its profit objective by adopting the DOD weighted guidelines. BEP’s application of these guidelines resulted in BEP’s adoption of a higher profit objective. Crane told us that the use of the DOD guidelines resulted in a fair return on the investment made in capital equipment with a minimum amount of labor costs. BEP’s contract files did not show any analysis demonstrating that, on contracts for which the DOD guidelines were used to analyze profits (including contract 97-10), the higher profits benefited the government through reduced prices, reduced labor costs, or other advantages. Although BEP primarily relied on audits of Crane’s proposals to determine if the prices proposed were fair and reasonable, two factors qualified the usefulness of these data. First, in a 1994 post-award audit of a cost proposal, DCAA identified about $3 million in overpricing attributed to Crane’s accounting system. The auditors observed that Crane’s cost accounting system was based on standard costs that were not periodically adjusted to reflect actual costs. Also, in several audit reports covering the proposals for contracts we reviewed, DCAA reported that it had not been asked to review the contractor’s budgeting/estimating system. A second factor was the lack of post-award audits of the contractor’s costs for contracts 95-23 and 97-10. 
In post-award audits, DCAA attempts to verify whether the costs proposed were based on accurate, complete, and current data as required by the Truth in Negotiations Act. BEP officials said that they asked for DCAA audits of the contractor’s budgeting/estimating system and post-award audits of contracts 95-23 and 97-10 in May 1998. BEP officials said they did not ask for these audits earlier because the contractor’s staff who would be responsible for working with the DCAA auditors were engaged in preparing cost proposals, and BEP did not want to interfere with these activities. Although it is unrelated to whether the government paid fair and reasonable prices for currency paper (a judgment the contracting officer makes based on the prices proposed for given quantities of supplies), we also found that certain BEP procurement practices contributed, or could have contributed, to higher than necessary currency paper costs. The practices included ordering inconsistent quantities of paper, understating quantities expected to be ordered, and not obtaining royalty-free data rights for security thread used in U.S. currency. According to a former Crane official we interviewed, BEP did not order consistent amounts of paper under the contracts. Consequently, Crane was not able to maintain a steady production schedule and had to have more equipment than necessary to produce paper to meet BEP’s inconsistent ordering. This official said there were times when Crane’s paper mill would be operating only a few days a week due to lower-than-usual orders for paper; but at other times, the mill would have to operate at full capacity for weeks in order to fulfill a larger-than-usual BEP order. Similarly, the BEP contracting officer noted in an October 1996 trip report on a visit to Crane that Crane had requested BEP to commit to leveling out production orders. 
The contracting officer reported that Crane had experienced four layoffs that year that were costly and could result in the loss of skilled workers. The five contracts awarded from 1988 to 1997 were either fixed-price requirements contracts or indefinite delivery/indefinite quantity contracts. Under either type of contract, the government provides an estimate of the quantities of paper to be bought, and the contractor proposes a price-per-sheet of paper based on that quantity. Because of the relatively high fixed costs in producing currency paper, primarily due to high equipment costs, a higher volume equates to a lower unit cost as the fixed costs are spread over more units. Because paper contracts are awarded on a price-per-sheet basis, government orders in excess of the estimated quantity would be expected to result in lower per sheet actual costs and increased profit per sheet. For example, under the base period for contract 88-205, the contractor provided a price of $.1254 per sheet, based on an estimated quantity of 360 million sheets. BEP actually bought 435 million sheets under this contract, which we estimate to have contributed about $1.5 million in additional contract profits. Other contracts we reviewed also had differences between the estimated contract quantities and the actual orders. A third issue involves BEP’s failure to obtain royalty-free data rights to the security thread used in currency and the process used to insert the thread. Crane, with its affiliated company, Technical Graphics, Inc., holds patents for the thread and the process used to insert the thread in the currency paper. Although this thread is unique to U.S. currency, a BEP official said that the government does not have the patents or a royalty-free license to use the thread because the government never directly paid for their development. The BEP official said that in the early 1980s, Crane approached BEP with an idea for the thread. 
BEP encouraged Crane to develop it but did not enter into a research and development contract with Crane to develop the concept. A BEP official observed that a research and development contract would have been the vehicle for the government to obtain an interest in the concept. According to BEP officials, Crane used its own funds to develop the thread and insertion process, so Crane is entitled to the patents. BEP officials also said that the government indirectly paid for much of Crane’s development cost. They said the government cannot obtain the royalty-free data rights unless it contracts to do so. Although there have not been any disruptions in the supply of currency paper for the last 119 years, BEP has not been in a good negotiating position and has been vulnerable because it did not have a second source for currency paper or a reserve inventory of currency paper. The Conte Amendment allows BEP to contract with a foreign entity if a domestic source is not available, thus providing some relief if the current supplier were to encounter a catastrophic incident and be unable to supply currency paper. However, BEP officials told us that a foreign source would require at least 3 months to prepare, produce, and ship watermark and threaded paper and between 1 and 2 months to deliver currency paper without watermarks. BEP is in the process of establishing a 3-month contingency supply of currency paper, which BEP expects to be completed in 1999. Although the longstanding single supplier has been a reliable source of currency paper, the combination of relying on a single supplier and not having an inventory placed BEP in a weak negotiating position and presented some risks. 
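The adequacy of the planned reserve can be checked against the lead times BEP officials cited. The sketch below combines those lead times with the Federal Reserve's 40-day stock of finished currency at its reserve banks; treating a month as 30 days is our simplification, and the calculation is an illustration rather than BEP's own contingency analysis.

```python
# Back-of-the-envelope check of the contingency posture described above.
# Lead times are those BEP officials cited; the 40-day finished-currency
# stock is as reported by the Federal Reserve; 30-day months are assumed.

DAYS_PER_MONTH = 30

def coverage_gap_days(reserve_months, finished_currency_days, lead_time_months):
    """Days of shortfall; a negative value means supply covers the lead time."""
    supply_days = reserve_months * DAYS_PER_MONTH + finished_currency_days
    return lead_time_months * DAYS_PER_MONTH - supply_days

# Planned 3-month paper reserve plus the Fed's 40-day finished-currency
# stock, against the ~3-month lead time for watermarked, threaded paper:
print(coverage_gap_days(3, 40, 3))   # -40 (covered, with 40 days to spare)

# The earlier "just-in-time" posture, with no paper reserve:
print(coverage_gap_days(0, 40, 3))   # 50 (a 50-day shortfall)
```

On these assumptions, the 3-month reserve closes the gap that the just-in-time posture left open, which is consistent with BEP's determination that a 3-month contingency supply would be adequate.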
In BEP’s price negotiation memorandums for contracts 88-205 and 95-23, BEP’s contracting officers stated that although they considered Crane’s price too high, BEP awarded contracts to Crane at the prices Crane proposed in order to ensure a continuous supply of currency paper. Furthermore, in a meeting between BEP and Crane to negotiate contract 95-23, the parties could not reach agreement over price. The former Chief Executive Officer of Crane said that “BEP would just have to run out of paper,” according to a memo written by a Treasury official dated June 21, 1995. Although BEP and Crane eventually reached agreement, the former Chief Executive Officer of Crane told us that in June 1995 he told Treasury officials, in effect, that he would not agree to another paper contract and that BEP would have to run out of paper. He said that this statement stemmed from issues surrounding the arbitration settlement. He said that a few months earlier, Crane and BEP had signed the agreement to accept the terms of the arbitration settlement; however, BEP was still questioning the prices covered by the agreement. According to the former Crane official, the only leverage Crane had to settle with BEP was to refuse to enter into any new contracts. Under the Conte Amendment, if the Secretary of the Treasury determines that no domestic source of currency paper exists in the United States, the requirement for currency paper to be produced in the United States and the prohibition against the purchase of currency paper from a supplier owned or controlled by a foreign entity would not apply. In order to procure currency paper from a foreign supplier, several actions would need to be taken. First, under the Conte Amendment, a written finding by the Secretary of the Treasury justifying the basis for the determination that no domestic manufacturer of currency paper exists must be published in the Federal Register. 
According to BEP officials, this could be done within a matter of days. Second, BEP would need to contract with Crane to acquire the security thread so BEP could provide the thread as government-furnished property. Finally, it would take the foreign paper manufacturer about 3 months to start providing BEP with the currency paper, according to BEP officials. To its credit, BEP recently decided to replace its “just-in-time” approach to maintaining an inventory with a 3-month contingency supply of currency paper. According to its 1996 strategic contingency plan for critical materials, BEP determined that a 3-month contingency supply of currency paper would be adequate. BEP officials said that they will have the inventory built up by 1999. According to the Federal Reserve, it maintains a 40-day supply of finished currency at each of its reserve banks, which would also provide some additional time to bring on another source of currency paper if this were needed. In our survey of other G-7 nations, we were told that the amount of banknote paper maintained in reserve ranged from 1 month in England and Japan to 2 to 3 months in France and Italy. In Canada, the banknote printers are responsible for procuring their own banknote paper. Germany would not provide information on its inventory. Additionally, like the United States, none of the other G-7 nations maintains a second supplier of banknote paper to protect against possible disruptions in the supply of their banknote paper. BEP appears to be ahead of schedule in achieving its goal to have a 3-month reserve for each denomination. According to BEP, as of May 1998, it had a 3-month reserve for each individual denomination with the exception of the $20 denomination, which was recently redesigned. BEP officials anticipate reaching the 3-month reserve for the newly designed $20 note during calendar year 1999. 
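The coverage arithmetic behind the 3-month reserve can be sketched briefly. This is a rough, illustrative approximation, not an analysis BEP performed: it treats a month as 30 days and simply adds the Federal Reserve's 40-day finished-currency stock (which the Federal Reserve indicated would provide some additional time) to BEP's planned paper reserve.

```python
# Rough sketch: how long existing stocks might cover a supply disruption
# versus the lead time to bring on a foreign paper source, using the
# figures described in the report. Months are approximated as 30 days.

MONTH = 30  # days, rough conversion for comparison only

bep_reserve = 3 * MONTH            # BEP's planned 3-month currency paper reserve
fed_finished_currency = 40         # Federal Reserve's 40-day finished-currency stock

foreign_lead_watermark = 3 * MONTH  # foreign lead time for watermark/threaded paper
foreign_lead_plain = 2 * MONTH      # upper bound for paper without watermarks

total_cover = bep_reserve + fed_finished_currency

print(f"approximate days of cover from reserves: {total_cover}")
print(f"foreign lead time, threaded paper: {foreign_lead_watermark} days")
print(f"margin over threaded-paper lead time: {total_cover - foreign_lead_watermark} days")
```

Under these rounded assumptions, combined stocks would slightly exceed the 3-month foreign lead time for threaded paper, which is consistent with the report's conclusion that the reserve reduces, but does not eliminate, the risk of disruption.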
Obtaining competition in currency paper procurement is challenging, partly because of the uniqueness of currency paper, which requires a relatively large investment in capital equipment. In addition, special statutory provisions govern the acquisition of currency paper: contracts for the manufacture of currency paper are limited to 4 years, the paper must be manufactured in the United States, and purchases of currency paper from foreign-owned or controlled entities are prohibited. Although most BEP solicitations issued before 1997 were competitive, BEP was not successful in obtaining competition because no firm other than Crane submitted an offer. BEP’s efforts in the 1960s and 1980s to establish a second source for currency paper also were unsuccessful, for similar reasons. We recognize that there are some uncertainties to the competitive process, even if the existing problems are solved. For example, 12 paper manufacturers told us that they are capable now, or would be in the near future, of supplying at least part of BEP’s currency paper needs if further changes are made. However, we cannot say with any certainty how many, if any, would submit an offer; whether they would be price-competitive with Crane; or whether the quality of paper and reliability of delivery would be maintained. In addition, 5 of the 12 paper manufacturers are foreign-owned and are precluded from receiving a contract award under current law. It is uncertain whether the government can successfully develop a second domestic source for future paper needs, primarily because it is unknown how prices would change. Prices might increase if more than one supplier were used. For example, if the same quantity of paper is obtained from two or more suppliers, each with substantial capital investments, the unit price for paper is likely to be higher from each. 
Therefore, although having a second supplier could lessen the government’s vulnerability to a disruption in supply, having two suppliers could result in an increased cost to the government. On the other hand, a single supplier has less incentive to be efficient or to keep prices and costs to a minimum than suppliers who have to compete with each other, and DOD has reportedly benefited from having a second source in some instances. In its most recent currency paper solicitation, BEP has taken several actions to encourage competition, including providing up to 24 months for potential suppliers to start production for currency paper with additional security features and providing for longer contract performance periods, within the statutory 4-year limit. However, if these steps are not sufficient to encourage offers from additional suppliers, additional actions by Treasury and BEP to promote competition may be appropriate. Given the current statutory constraints; previous efforts to study this problem, as well as Treasury’s ongoing study of future currency demand, which could affect the economic viability of having more than one currency paper supplier; and the uncertainties discussed in our report, we believe it is premature to recommend specific steps at this time. Moreover, additional insight on this issue should be available after Treasury completes its ongoing study of future currency demand and as other information becomes available, such as the currency paper prices BEP obtains under its current solicitation and any changes in legislation affecting currency paper procurement. BEP has generally not been adequately prepared to know what it should be paying for currency paper because, until recently, it has done only limited cost analysis and has not used price analysis. BEP could improve some aspects of its currency paper procurements. 
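The fixed-cost arithmetic underlying both points (lower unit costs at higher volumes, and extra profit on orders above a contract estimate, as in contract 88-205) can be illustrated with a short sketch. The fixed/variable cost split below is hypothetical, chosen only so that the marginal profit roughly matches the $1.5 million estimate cited earlier; only the per-sheet price and sheet quantities come from the contract 88-205 figures described in the report.

```python
# Illustrative sketch of why per-sheet cost falls as volume rises and why
# orders above the contract estimate add profit. The fixed and variable
# cost figures are hypothetical, not BEP's or Crane's actual cost data.

def unit_cost(fixed_cost, variable_cost_per_sheet, sheets):
    """Total cost per sheet when fixed costs are spread over `sheets` units."""
    return fixed_cost / sheets + variable_cost_per_sheet

FIXED = 5_000_000         # hypothetical annual fixed cost (equipment, etc.)
VARIABLE = 0.1054         # hypothetical variable cost per sheet
PRICE = 0.1254            # contract 88-205 base-period price per sheet
ESTIMATED = 360_000_000   # estimated quantity on which the price was based
ACTUAL = 435_000_000      # sheets actually bought

cost_at_estimate = unit_cost(FIXED, VARIABLE, ESTIMATED)
cost_at_actual = unit_cost(FIXED, VARIABLE, ACTUAL)

# Sheets beyond the estimate carry only variable cost, so each extra sheet
# earns (price - variable cost) in additional profit.
extra_profit = (ACTUAL - ESTIMATED) * (PRICE - VARIABLE)

print(f"unit cost at estimated volume: ${cost_at_estimate:.4f}/sheet")
print(f"unit cost at actual volume:    ${cost_at_actual:.4f}/sheet")
print(f"added profit on extra sheets:  ${extra_profit:,.0f}")
```

The same mechanism works in reverse when a fixed volume is split between two suppliers: each spreads its own fixed costs over fewer sheets, so each supplier's unit cost, and likely its price, rises.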
The evidence demonstrates that BEP (1) lacked an aggressive effort to encourage Crane to develop an acceptable cost accounting system; (2) did not always arrange for post-award audits and audits of the supplier’s cost estimating system; (3) did not include data and analyses in the procurement record that demonstrated the benefit that BEP was to receive when it approved profits that were to recognize or provide an incentive for capital investment; (4) conducted limited analysis of supplier costs and prices, in the context of the worldwide market for currency paper; (5) failed to accurately estimate the amount of paper it needed to procure and ordered inconsistent amounts of paper; and (6) did not take action to arrange for royalty-free access to security thread. In addition to actions to correct these problems, recent efforts to establish a 3-month inventory of currency paper should provide an additional tool to help BEP better ensure that fair and reasonable prices can be achieved. As noted above, BEP has taken several actions to encourage competition. For example, BEP extended the period for potential suppliers to start production for currency paper with additional security features and provided for longer contract performance periods than it had in the past. However, BEP must acquire currency paper within the existing legal framework. According to BEP, the legal framework requires that offerors’ start-up period be included in the 4-year contract period, thus reducing the manufacturing period and limiting the effect of BEP’s actions. According to BEP, the 4-year statutory limit on contracts was created in 1916 to extend the contracts beyond the 1-year statutory limit then in effect, in order to better ensure a reliable supply of materials. BEP’s options for encouraging competition could be further enhanced if Congress lengthened the 4-year limit for currency paper contracts to give potential offerors a longer time to recover their capital investments. 
If efforts to obtain competition continue to be unsuccessful, BEP’s capacity to achieve fair and reasonable prices could be enhanced through congressional action. BEP’s strategy options could be further strengthened if Congress provided additional authority by modifying the Conte Amendment’s prohibition on procuring currency paper from foreign-owned or controlled suppliers to permit the Secretary of the Treasury to do so on a temporary basis if it is determined that currency paper is not available from a domestic source at fair and reasonable prices. Such a modification could provide additional leverage for the government in its negotiations with the current supplier, or any future domestic supplier(s), and increase the likelihood that fair and reasonable prices can be achieved. To strengthen BEP’s capacity to ensure fair and reasonable prices, we recommend that the Secretary direct BEP to
• ensure that the contractor maintains acceptable cost accounting and estimating systems for future contracts and that they are periodically audited;
• arrange for post-award audits of the contractor’s costs;
• include data and analyses in the currency paper procurement record that demonstrate the benefits the government is to receive when it approves profit levels that are aimed at recognizing or providing an incentive for capital investments; and
• to the extent possible, make more extensive use of price analysis to determine the fairness and reasonableness of prices, including the collection of data from foreign countries on their currency prices and data on similar supplies purchased by other agencies, such as paper for passports and money orders. 
To further enhance opportunities for other paper manufacturers to offer to provide currency paper to the government and to obtain offers that represent the best value to the government for the paper, we also recommend that the Secretary ensure that all future currency paper procurements reflect the expected amounts of paper needed and that orders against contracts are for consistent amounts. This would allow the supplier(s) to maintain a steady production level and stabilize workforce levels. Finally, we recommend that the Secretary ensure that the government obtains royalty-free data rights to any future security measures incorporated into currency paper. To further assist the Secretary in obtaining competition from domestic sources, Congress may wish to consider lengthening the 4-year limit for currency paper contracts to give potential offerors a longer time to recover their capital investments. If adequate price competition among two or more suppliers can be achieved, concerns over whether the prices paid are fair and reasonable should be reduced. Finally, because BEP’s past efforts to encourage domestic competition for currency paper have been unsuccessful and future efforts are uncertain, and because BEP has not always been able to ensure fair and reasonable prices from the current supplier in some past procurements, additional authority may be necessary to protect the government’s interests in obtaining currency paper. Specifically, Congress may want to consider revising the Conte Amendment, which allows the Secretary of the Treasury to obtain currency paper from a foreign-owned source only if no domestic supplier is available, to permit the Secretary to authorize obtaining currency paper from a foreign-owned source on a temporary basis if it is determined that no domestic supplier will provide paper at fair and reasonable prices. Such a provision should improve the likelihood that fair and reasonable prices could be obtained. 
We provided copies of a draft of this report for comment to the Chairman of the Board of Governors of the Federal Reserve System, the Acting Director of BEP, the Secretary of the Treasury, and the Chief Executive Officer of Crane. On July 29, 1998, the Assistant to the Board of Governors of the Federal Reserve System provided oral comments on our draft report. He said the Federal Reserve considered the analysis and recommendations to be reasonable. We also received written comments from the Acting Director of BEP, dated July 29, 1998, which are reprinted in appendix VI; and we received written comments from Crane dated July 28, 1998, which are reprinted in appendix VII. According to BEP officials, the BEP comments included input from the Department of the Treasury. Finally, in a meeting with us on July 29, 1998, BEP provided a number of oral technical clarifications to our report that we made where appropriate. The Acting Director of BEP stated that our report does not recognize that BEP complied with the FAR in the award of the five contracts we reviewed and provided comments on our recommendations. We did not make a comprehensive assessment of BEP’s compliance with FAR in connection with the five contracts we reviewed and thus are not in a position to make an overall statement on BEP’s compliance. Our draft report included recommendations that BEP (1) consider amending solicitation 97-13 and future solicitations to provide financial assistance if deemed to be economically advantageous to the government; and (2) consider excluding Crane from some or all of BEP’s currency paper requirements, as an example of a strategy to establish an alternative source. In its comments, BEP endorsed the idea of providing financial assistance but did not agree with amending solicitation 97-13 because it believes solicitation 97-13 provided for financial assistance. 
In addition, BEP said that Treasury is currently studying the future demand for currency, and once the study is completed, BEP will be in a better position to assess the cost reduction potential associated with developing additional suppliers. BEP also disagreed with our recommendation that Treasury consider excluding Crane from some or all of BEP’s currency paper requirements. BEP said that excluding Crane from competing for all of its requirements was not feasible because of the lack of an alternative domestic source, and that excluding Crane from part of its requirements would not be practical or economically feasible, citing a previous determination that the price for currency paper could increase significantly due to the high capital investment cost for a potential new supplier. After carefully considering BEP’s comments, as well as reconsidering the uncertainties we identified in our draft report, we agree with BEP that amending solicitation 97-13 to offer financial assistance and excluding Crane from all of its requirements could create difficulties for BEP in meeting its responsibilities to ensure an adequate supply of currency paper. We also agree with BEP that it should be in a better position to evaluate the feasibility of establishing additional suppliers after Treasury completes its ongoing study of future currency demand, which Treasury expects to be done soon. In fact, our draft report recognized that future currency paper demand was one of the factors that needed to be considered in deciding on the feasibility of additional suppliers. Accordingly, we deleted our recommendations to the Secretary aimed at encouraging competition, reflecting BEP’s concerns, the uncertainties identified in our report, and Treasury’s ongoing effort to project future currency demand. 
However, we believe that future consideration by Treasury of additional measures to encourage competition may be appropriate after it finishes its study of future currency demand for a number of reasons. First, significant changes in future currency demand could affect the economic feasibility of establishing other suppliers. Second, BEP’s statement that it has determined that establishing another supplier would not be economically advantageous appears to be based on its 1996 currency paper study, which was done before BEP accepted higher prices for newly designed currency paper under contract 97-10; higher prices could affect the conclusions Treasury reached in its 1996 study. Third, Treasury’s report on its 1996 currency paper study did not fully address the economic feasibility of establishing a second supplier under different scenarios that would be possible if existing restrictions on the contract period or percentage of foreign ownership and control were changed. Regarding our recommendation to ensure the contractor maintains acceptable cost accounting and estimating systems, BEP said that it has audited the contractor’s cost accounting practices and will continue to do so. However, on July 29, 1998, BEP officials told us that they still had not obtained an audit of Crane’s estimating system. We believe that this should have been done earlier because the estimating system helps to ensure that cost proposals are based on reliable and consistent data. In reference to our recommendation to arrange for post-award audits for the contractor’s costs, BEP said that it had requested audits and that ongoing IG and DCAA investigations and audits occasionally interfered with timely post-award audits. We believe BEP should continue to pursue these audits because past efforts to follow up on obtaining post-award audits have not always been timely and because they help protect the government’s interests. 
With respect to our recommendations that solicitations reflect expected paper needs and that orders be evened out, allowing the supplier to maintain a steady production level, BEP agreed that improvements were needed and said it has taken corrective actions to ensure that the quantities bought under contract 97-10 represent actual requirements. We believe these actions are a step in the right direction and should be continued in future orders of currency paper. BEP disagreed with our recommendation that it make more extensive use of cost and price analysis. BEP pointed out that in its two most recent contracts, it had applied a number of cost analysis techniques. Our draft report recognized that BEP had done more cost analysis on contract 97-10 than had been done in previous contracts. However, BEP did not do adequate price analysis for any of the five contracts we reviewed, including 97-10, and did not do adequate analysis to support the profit levels it accepted. Accordingly, we modified our recommendation to address the need for greater analysis of proposed profit levels. Regarding our recommendation to collect pricing data from foreign countries, BEP said it would continue to try to obtain foreign country currency paper data. We added some language to the report to clarify how this might be done. With respect to the related suggestion that BEP collect pricing data on similar supplies purchased by other agencies, such as passport and money order paper, BEP said it believed that comparing currency paper prices to passport and money order paper prices would not produce any meaningful information because these papers are different from currency paper. Our report recognizes that although comparisons of these types of papers would not provide a basis for a definitive conclusion, they may provide some insight for assessing pricing trends. BEP said it agreed with our recommendation to obtain royalty-free data rights to future security measures. 
BEP pointed out that the cost of such royalties for the security thread is less than 0.2 percent of the cost of the currency paper contract. However, BEP did not address the effect these patents had on its 1997 competitive solicitation or could have on future solicitations. As discussed in chapter 2, several paper manufacturers stated that the requirement to pay royalties to license the data and the process for inserting the security thread made it difficult for them to compete. One paper manufacturer filed a protest with BEP over the security thread license and said that the solicitation places potential offerors in a position of violating a patent held by Crane if they supply currency paper containing security thread made to BEP’s specifications. In response to this protest, BEP agreed to provide the security thread as government-furnished property. Crane provided lengthy comments on many of the issues addressed in this report. Our specific responses to the comments are included in appendix VII. In general, Crane said that although it agreed with many of our factual findings, it disagreed with most of our recommendations and one of our matters for consideration by Congress. Crane also suggested specific technical changes to clarify our report, which we have made where appropriate. In objecting to our recommendations that BEP and Treasury take further steps to encourage competition in the supply of the nation’s currency paper, Crane said that these steps have already been adopted by BEP and no further action is necessary. Crane specifically objected to the recommendations in the draft report that BEP further consider options for providing financial assistance to other potential suppliers and that BEP consider excluding Crane from all or some of its currency paper requirement to encourage participation by other potential suppliers. 
While these strategies are permitted under law, Crane said that they would result in higher costs and possible disruptions to the supply of currency paper. As we explain in response to BEP’s concern about these recommendations, we acknowledge and stress in the report that the impact of alternative strategies is uncertain and that many factors would have to be weighed in considering any option. In light of BEP’s concerns and to recognize the uncertainty involved, we have deleted the recommendations proposed in our draft report to encourage Treasury and BEP to further consider the feasibility and advisability of additional measures to encourage competition. Crane agreed with our suggestion to Congress that consideration be given to modifying the 4-year limit on currency paper contracts. However, Crane opposed our further suggestion to Congress that the Secretary of the Treasury be given additional authority to acquire currency paper from foreign-owned firms in the event that fair and reasonable prices cannot be obtained from a domestic source. We can understand Crane’s position on this matter, since it believes that its prices have been fair and reasonable and that the alternative of acquiring currency paper from a foreign source is not necessary. However, as our report clearly states, there have been occasions in the past in which BEP has not been able to determine that Crane’s prices were fair and reasonable, but the lack of other domestic suppliers and the current restriction prohibiting acquiring currency paper from foreign-owned sources unless no domestic source exists have limited BEP’s negotiating strategies. For these reasons, we continue to believe that Congress should consider a limited expansion of the Secretary’s authority.
Pursuant to a legislative requirement, GAO provided information on the: (1) optimum circumstances for the procurement of distinctive currency paper; (2) effectiveness of the Bureau of Engraving and Printing's (BEP) efforts to encourage competition in the procurement of currency paper; (3) fairness and reasonableness of prices paid for currency paper by BEP and the quality of the paper purchased; and (4) potential for disruption to the U.S. currency paper supply from BEP's reliance on a single source. GAO noted that: (1) the optimum circumstances for the procurement of distinctive currency paper would include an active, competitive market for such paper, where a number of responsible sources would compete for BEP's requirements; (2) however, these circumstances have not existed because of the unique market for currency paper and some statutory restrictions; (3) BEP has been aware of the need to increase competition and has made some efforts recently to do so in areas under its control; (4) however, BEP must procure currency paper within the current statutory framework, which limits currency paper contracts to 4 years, prohibits currency paper production outside of the United States, and prohibits purchase of currency paper from foreign-owned or controlled entities; (5) of the 20 paper manufacturers that responded to GAO's survey, 12 said they were interested in and have the capability now, or could be made capable in the near future, of supplying at least part of BEP's currency paper needs if existing statutory requirements and some of BEP's solicitation terms were changed; (6) 7 of the 12 are domestic paper manufacturers, and 5 are located in foreign countries; (7) although the long-term relationship between BEP and Crane & Co., Inc. 
has historically resulted in quality currency paper, BEP was unable to determine that it had obtained fair and reasonable prices for 13 of the 17 contract actions awarded from 1988 to 1997; (8) BEP sometimes accepted prices even though it was unable to determine that they were fair and reasonable because it had no other source for currency paper; (9) GAO believes that BEP's assessments of the fairness and reasonableness of Crane's proposed prices were hampered by a number of factors, including the lack of market prices for currency paper and the limited analyses of proposed costs and prices it performed; (10) as the government's agent for acquiring currency paper, BEP is responsible for ensuring that the government's supply of paper is not disrupted; (11) although the potential for disruption in the supply of currency paper exists, there have been no such disruptions; (12) however, for many years, because BEP did not maintain a reserve inventory of paper to provide for contingencies, it was more vulnerable to adverse consequences if a disruption had occurred and was at a disadvantage in its contract negotiations because it lacked an alternative source for currency paper; and (13) BEP has recently been purchasing paper to build a 3-month reserve supply and, under the Conte Amendment, could buy paper from a foreign source if no domestic source exists.
As part of our audit of the fiscal years 2008 and 2007 CFS, we evaluated the federal government’s financial reporting procedures and related internal control. Also, we determined the status of corrective actions by Treasury and OMB to address open recommendations, detailed in our previous reports, relating to the processes used to prepare the CFS. In our audit report on the fiscal year 2008 CFS, which is included in the fiscal year 2008 Financial Report of the United States Government (Financial Report), we discussed the material weaknesses related to the federal government’s processes used to prepare the CFS. These material weaknesses contributed to our disclaimer of opinion on the accrual basis consolidated financial statements and also contributed to our adverse opinion on internal control. We performed sufficient audit procedures to provide the disclaimer of opinion on the accrual basis consolidated financial statements in accordance with U.S. generally accepted government auditing standards. This report provides the details of the material weaknesses identified during the fiscal year 2008 audit that relate to the processes used to prepare the CFS and our recommendations to correct these weaknesses, as well as the status of corrective actions by Treasury and OMB to address recommendations from previous reports. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are summarized in the Agency Comments and Our Evaluation section of this report. Treasury’s comments are reprinted in appendix II and are also summarized in the Agency Comments section. Treasury did not establish policies and procedures to provide assurance that federal agencies’ intragovernmental payroll tax amounts are identified and eliminated at the governmentwide level when compiling the CFS. 
Consolidated financial statements are intended to present the results of operations and financial position of all of the components that make up the reporting entity as if the entity were a single enterprise. Therefore, when preparing the CFS, Treasury should ensure that intragovernmental activity and balances between federal agencies are eliminated. Federal agencies, as well as other employers, are required to pay, among other taxes, a matching amount of Social Security and Medicare taxes for their employees (payroll taxes). Federal agencies’ payments of payroll taxes to the Internal Revenue Service represent intragovernmental transactions. If these amounts are not eliminated at the governmentwide level when compiling the CFS, revenues and expenses become overstated in the CFS. However, in disclosing the types of revenues included in the Statement of Operations and Changes in Net Position in the draft CFS, Treasury’s description stated that “individual income tax and tax withholdings include….payroll taxes collected from other agencies.” We inquired of Treasury as to why these amounts would be included in the CFS and not eliminated during the preparation process. Treasury subsequently deleted the language regarding the inclusion of federal agency payroll taxes from the CFS. However, Treasury was unable to provide any documentation demonstrating that these amounts were appropriately classified as intragovernmental transactions and eliminated from the CFS. Without adequate policies and procedures to accurately identify and eliminate intragovernmental payroll tax amounts in the process used to prepare the CFS, the federal government’s ability to determine the impact of these amounts on the CFS is impaired and, consequently, the CFS may be misstated. 
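The elimination principle described above can be illustrated with a small sketch. The agency names and dollar amounts below are hypothetical; the point is that an intragovernmental payroll tax payment appears as both an expense of the paying agency and a revenue of the collecting agency, so it double-counts in the consolidated totals unless removed.

```python
# Hypothetical illustration of eliminating an intragovernmental payroll tax
# transaction when consolidating agency results. All amounts are invented.

agencies = {
    # agency: (revenue, expense), in millions
    "Agency A": (500, 480),   # expenses include 20 of employer payroll tax paid to IRS
    "IRS":      (900, 300),   # revenues include the same 20 collected from Agency A
}
intragov_payroll_tax = 20     # Agency A's payment to the IRS

# Naive consolidation simply sums the columns and double-counts the transfer.
naive_revenue = sum(r for r, _ in agencies.values())
naive_expense = sum(e for _, e in agencies.values())

# Proper consolidation eliminates the intragovernmental amount from both
# sides, so the statements report only transactions with outside parties.
consolidated_revenue = naive_revenue - intragov_payroll_tax
consolidated_expense = naive_expense - intragov_payroll_tax

print(f"naive:        revenue {naive_revenue}, expense {naive_expense}")
print(f"consolidated: revenue {consolidated_revenue}, expense {consolidated_expense}")
```

Note that the elimination leaves net operating results unchanged; its effect is to remove the equal overstatement of both revenues and expenses, which is exactly the misstatement risk described above.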
We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to design, document, and implement policies and procedures to identify and eliminate intragovernmental payroll tax amounts at the governmentwide level when compiling the CFS. Treasury, in coordination with OMB, did not take the steps necessary to help assure that certain key information related to significant financial events and conditions was consistently and accurately presented throughout the fiscal year 2008 Financial Report. Specifically, Treasury, in coordination with OMB, has not fully established an effective process for preparing and reviewing information included in the Management’s Discussion and Analysis (MD&A) and “The Federal Government’s Financial Health: A Citizen’s Guide to the Financial Report of the United States Government” (Citizen’s Guide) sections of the Financial Report. According to Statement of Federal Financial Accounting Standards No. 15, Management’s Discussion and Analysis, the MD&A should highlight key information and increase the understanding and usefulness of the Financial Report. Similarly, the Citizen’s Guide is intended to provide readers with a brief, high-level summary of key financial information from the Financial Report on our nation’s current fiscal condition and long-term sustainability. Further, data presented in both the MD&A and Citizen’s Guide must be consistent with related data in the CFS. Treasury, in coordination with OMB, performed certain procedures to prepare and review the MD&A and Citizen’s Guide. However, these procedures were not effective in helping assure that (1) information was consistently reported in the CFS and these related sections of the Financial Report and (2) information reported in the MD&A and Citizen’s Guide was consistent, complete, and accurate. 
During our comparison of information reported in draft versions of the MD&A and Citizen’s Guide with information reported in the fiscal year 2008 draft CFS, we identified (1) several inconsistencies and (2) numerous instances in which information was omitted from, or incorrectly reported in, these draft sections of the Financial Report that were not detected by Treasury’s review process. For example, information in the draft MD&A and Citizen’s Guide regarding certain federal actions for addressing the financial crisis was incomplete or incorrectly reported. In addition, the $339 billion change in veterans benefit liability in fiscal year 2008 reported in the draft CFS was incorrectly reported as $365 billion in the draft versions of the MD&A and Citizen’s Guide. We communicated our findings to Treasury officials who corrected the data presented in the MD&A and Citizen’s Guide sections of the final Financial Report. Without effective procedures for preparing and reviewing the MD&A and Citizen’s Guide to ensure that the information is complete, accurate, and consistent with the information reported in the CFS, Treasury is at risk that information provided in these key sections of the Financial Report will not be reliable. A contributing factor to the reporting errors and inconsistencies we detected is that Treasury does not have documented procedures for preparing and reviewing the MD&A and Citizen’s Guide sections of the Financial Report in comparison with data presented in the CFS. As preparer of the Financial Report, Treasury management, in coordination with OMB, is responsible for developing and documenting detailed policies, procedures, and practices and for ensuring that internal control is built into and is an integral part of operations to ensure that information is consistent and accurate throughout the Financial Report. GAO’s Standards for Internal Control in the Federal Government calls for clear documentation of policies and procedures. 
Although Treasury has documented policies and procedures used to compile the CFS in its Standard Operating Procedures (SOP) entitled “Preparing the Financial Report of the U.S. Government,” the SOP does not provide procedures for preparing and reviewing the MD&A and Citizen’s Guide—two key report sections providing information to the Congress and the public regarding the fiscal condition of the U.S. government—to help assure they are consistent and accurate in comparison with related information presented elsewhere in the Financial Report. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop, document, and implement processes and procedures for preparing and reviewing the MD&A and Citizen’s Guide sections of the Financial Report to help assure that information reported in these sections is complete, accurate, and consistent with related information reported elsewhere in the Financial Report. Treasury, in coordination with OMB, has not established and documented criteria for identifying which federal entities are significant to the CFS for purposes of verifying and validating the information submitted by federal entities for inclusion in the CFS. Treasury, through the Treasury Financial Manual (TFM), identified 35 significant federal agencies and entities, referred to as “verifying agencies.” Those agencies are required to perform a number of procedures to provide audit assurance over the information submitted to Treasury for the CFS. However, Treasury and OMB have not (1) established and documented criteria for designating federal entities as “verifying agencies” significant to the CFS, and (2) established policies and procedures for assessing and documenting, on an annual basis, which entities meet the criteria. Treasury, in coordination with OMB, is required to prepare the CFS. 
According to the Federal Accounting Standards Advisory Board’s Statement of Federal Financial Accounting Concepts No. 4, Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government, the consolidated financial report should be a general purpose report that is aggregated from federal agencies’ and other federal entities’ financial reports. The TFM provides policies and procedures on how federal agencies are to provide their financial data to Treasury for consolidation. In accordance with the TFM, verifying agencies are required to submit their financial data to Treasury using a Closing Package. The verifying agency’s Chief Financial Officer must certify the accuracy of the data in the Closing Package and have the Closing Package audited by the agency’s Inspector General. In addition, the Closing Package process requires verifying agencies to reclassify their audited financial statements to the Closing Package “special purpose financial statements.” Verifying agencies must also identify trading partners and enter certain financial statement notes. The special purpose financial statements are audited to obtain reasonable assurance about whether the financial statements are (1) free of material misstatements, (2) in conformity with accounting principles generally accepted in the United States, and (3) presented pursuant to the requirements of the TFM. Because the Closing Package process requires verifying agencies to verify and validate the information in the special purpose financial statements with their audited information and receive an audit opinion, Treasury is provided a level of assurance that it is compiling the CFS with audited financial information. All other federal entities that contribute financial information to the CFS are classified by Treasury as “nonverifying agencies.” Over 100 nonverifying federal agencies and entities submitted data for fiscal year 2008. 
Currently these entities are only required to submit adjusted trial balance data to Treasury instead of an audited Closing Package. Because of a lack of criteria for determining an entity’s significance to the CFS, it is unclear whether any of these “nonverifying agencies” should be classified as “verifying agencies.” One of Treasury’s and OMB’s goals for preparing the CFS is to link the agencies’ audited financial statements to the CFS. To accomplish this goal, Treasury needs an appropriate level of assurance that it compiles the CFS using audited Closing Packages from the federal entities contributing the most significant amounts of financial information. However, without establishing the criteria for identifying federal entities as significant to the CFS and establishing related policies and procedures to assess, on an annual basis, which entities meet such criteria, Treasury and OMB cannot obtain this level of assurance. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to (1) establish and document criteria to be used in identifying federal entities as significant to the CFS for purposes of obtaining assurance over the information being submitted by those entities for the CFS and (2) develop and implement policies and procedures for assessing and documenting, on an annual basis, which entities met such criteria. These actions will help provide Treasury and OMB with assurance that the information being used to prepare the CFS is consistent with the audited financial statements of the federal agencies, in all material respects. During fiscal year 2008, Treasury enhanced its SOP entitled “Preparing the Financial Report of the U.S. Government” to require an overall analysis of the consolidated numbers in the financial statements to include a review for reasonableness of changes from the prior year to the current year. 
However, because of a lack of details on the objectives of the analysis and the procedures to be performed, the overall analysis did not detect significant errors in amounts used to prepare the Statements of Net Cost (SNC). Internal control should provide, among other things, reasonable assurance that financial reporting is reliable. GAO’s Standards for Internal Control in the Federal Government defines the minimum level of quality acceptable for internal control in the federal government and provides the standards against which internal control is to be evaluated. These standards state that internal controls should include, among other items, reviews by management at the functional or activity level. Treasury categorizes and allocates costs in the SNC by agency. For example, most of the costs associated with pension and health benefits that are reported by the Office of Personnel Management (OPM) in its financial statements are allocated to the costs of OPM’s federal user agencies for governmentwide federal reporting purposes. Treasury uses head count figures reported by OPM in its Closing Package to perform the allocation of pension and health benefit costs across all user federal agencies. However, we found that Treasury did not detect a significant variance in head count between certain federal entities from 2007 to 2008, which resulted in significant errors in the draft SNC. Specifically, we found that, in fiscal year 2007, the head count used for the Department of Defense (DOD) was 497,724, and the head count used for “all other entities” was 92,566. In fiscal year 2008, we found that the head counts were erroneously reversed. The head count used for DOD was 95,157, while the head count used for “all other entities” was 495,673. Treasury’s review process and overall analysis did not detect this error. 
As a result, Treasury’s draft SNC understated DOD’s reported costs on the fiscal year 2008 SNC by approximately $10 billion, and the costs for the “all other entities” line item were equally overstated. Without sufficiently detailed procedures, including guidance for performing the analysis and review of data used in the allocation process for compiling the SNC, significant errors could occur in the SNC and not be detected. We reaffirm our recommendation that the Secretary of the Treasury direct the Fiscal Assistant Secretary to further enhance the SOP entitled “Standard Operating Procedures for Preparing the Financial Report of the U.S. Government” to better ensure that CFS compilation practices are proper, complete, and can be consistently applied, including detailed procedures for conducting reviews and documenting reasonableness of data used in the process for compiling the CFS. In oral comments on a draft of this report, OMB stated that it generally concurred with the new findings and related recommendations in this report. In addition, OMB provided technical comments, which we have incorporated as appropriate. In its April 15, 2009, written comments on a draft of this report, which are reprinted in appendix II, Treasury stated that it concurs with the new recommendations and expects to implement them by the end of fiscal year 2009. We will evaluate the actions taken to address our recommendations as part of our fiscal year 2009 CFS audit. This report contains recommendations to the Secretary of the Treasury. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform within 60 days of the date of this report. 
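The kind of reasonableness review the recommendation calls for can be sketched briefly. The head-count figures below are from the report; the allocation function and the 25 percent variance threshold are assumed design choices for illustration, not Treasury's actual procedure:

```python
# Sketch of head-count-based cost allocation plus a year-over-year variance
# check. The threshold and check design are hypothetical assumptions; the
# fiscal year 2007/2008 head counts are the figures cited in the report.

def allocate_costs(total_cost, head_counts):
    """Allocate a pooled cost (e.g., OPM pension and health benefits) to
    agencies in proportion to their reported head counts."""
    total_heads = sum(head_counts.values())
    return {agency: total_cost * n / total_heads
            for agency, n in head_counts.items()}

def flag_large_swings(prior, current, threshold=0.25):
    """Flag agencies whose head count moved more than `threshold` year over
    year; a reversed pair of figures would trip this check immediately."""
    return [agency for agency in prior
            if abs(current[agency] - prior[agency]) / prior[agency] > threshold]

fy2007 = {"DOD": 497_724, "All other": 92_566}
fy2008 = {"DOD": 95_157, "All other": 495_673}  # erroneously reversed figures
print(flag_large_swings(fy2007, fy2008))  # ['DOD', 'All other']
```

A review step of this shape, applied before allocation, would have surfaced the reversed DOD and "all other entities" head counts that Treasury's overall analysis missed.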
A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs and its Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security and the Chairman and Ranking Member of the House Committee on Oversight and Government Reform and its Subcommittee on Government Management, Organization, and Procurement. In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury, the Director of OMB, the Deputy Director for Management of OMB, and the Acting Controller of OMB’s Office of Federal Financial Management. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me (202) 512-3406 or engelg@gao.gov. Key contributors to this report are listed in appendix III. This appendix includes the status of recommendations from the following six reports that were open at the beginning of our fiscal year 2008 audit: Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Needs Improvement, GAO-04-45 (Washington, D.C.: Oct. 30, 2003); Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. Government Needs Further Improvement, GAO-04-866 (Washington, D.C.: Sept. 10, 2004); Financial Audit: Process for Preparing the Consolidated Financial Statements of the U.S. 
Government Continues to Need Improvement, GAO-05-407 (Washington, D.C.: May 4, 2005); Financial Audit: Significant Internal Control Weaknesses Remain in Preparing the Consolidated Financial Statements of the U.S. Government, GAO-06-415 (Washington, D.C.: Apr. 21, 2006); Financial Audit: Significant Internal Control Weaknesses Remain in the Preparation of the Consolidated Financial Statements of the U.S. Government, GAO-07-805 (Washington, D.C.: July 23, 2007); and Financial Audit: Material Weaknesses in Internal Control over the Processes Used to Prepare the Consolidated Financial Statements of the U.S. Government, GAO-08-748 (Washington, D.C.: June 17, 2008). Recommendations from these reports that were closed in prior years are not included in this appendix. This appendix includes the status of the 56 remaining open recommendations, according to the Department of the Treasury (Treasury) and the Office of Management and Budget (OMB), as well as our own assessments. Explanations are included in the status of recommendations per GAO when Treasury and OMB disagreed with our recommendation or our assessment of the status of a recommendation. We will continue to monitor Treasury’s and OMB’s progress in addressing GAO’s recommendations. Of the 56 recommendations relating to the processes used to prepare the consolidated financial statements of the U.S. government (CFS) that are listed in this appendix, 16 were closed and 40 remained open as of December 9, 2008, the date of our report on the audit of the fiscal year 2008 CFS. In addition to the above contact, the following individuals made key contributions to this report: Louise DiBenedetto, Assistant Director; Lynda Downing, Assistant Director; Cole Haase; Dragan Matic; Maria Morton; Thanomsri Piyapongroj; and Taya Tasse.
Since GAO's first audit of the fiscal year 1997 consolidated financial statements of the U.S. government (CFS), material weaknesses in internal control and other limitations on the scope of our work have prevented GAO from expressing an opinion on the accrual basis CFS. Certain of those material weaknesses relate to inadequate systems, controls, and procedures to properly prepare the CFS. The purpose of this report is to (1) provide details of the continuing material weaknesses related to the preparation of the CFS, (2) recommend improvements, and (3) provide the status of corrective actions taken to address the 56 open recommendations GAO reported for this area in June 2008. During its audit of the fiscal year 2008 CFS, GAO identified continuing and new control deficiencies in the federal government's processes used to prepare the CFS. These control deficiencies contribute to material weaknesses in internal control over the federal government's ability to (1) adequately account for and reconcile intragovernmental activity and balances between federal agencies; (2) ensure that the CFS was consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles; and (3) identify and either resolve or explain material differences between components of the budget deficit reported in the Department of the Treasury's records, used to prepare the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities, and related amounts reported in federal agencies' financial statements and underlying financial information and records. 
The control deficiencies GAO identified involved: (1) establishing and documenting policies and procedures for identifying and eliminating federal agencies' intragovernmental payroll tax amounts when compiling the CFS, (2) establishing and documenting policies and procedures for preparing and reviewing information included in key sections of the Financial Report of the U.S. Government, (3) establishing criteria for identifying federal entities' significance to the CFS and annually assessing which entities meet such criteria, (4) enhancing procedures for analyzing and reviewing data used when compiling the Statements of Net Cost, and (5) various other control deficiencies identified in previous years' audits. Of the 56 open recommendations GAO reported in June 2008, 16 were closed and 40 remained open as of December 9, 2008, the date of GAO's report on its audit of the fiscal year 2008 CFS. GAO will continue to monitor the status of corrective actions taken to address the 4 new recommendations as well as the 40 open recommendations from prior years.
Money laundering is the disguising or concealing of illicit income in order to make it appear legitimate. Over the past two decades, federal law enforcement efforts to detect money laundering have evolved into a strategy that is heavily dependent upon the reporting of large currency transactions and tactical and strategic intelligence analysis of the collected data. In 1988 the Department of the Treasury began to encourage banks and other financial institutions to supplement reports of large currency transactions with reports of suspicious transactions of any amount. Since then, the suspicious transaction reports have taken on a number of different formats that are filed with various law enforcement and regulatory agencies at both the state and federal levels. This report describes how these reports are made and how they are being used. Federal law enforcement officials estimate that between $100 billion and $300 billion in U.S. currency is laundered each year. While narcotics traffickers are the largest single block of users of money laundering schemes, numerous other types of activities typical of organized crime—for example, illegal gambling or prostitution—create an appreciable demand. In addition, violations of tax laws often accompany laundering schemes that conceal the existence of an illegal source of income. Money laundering is also a factor in many cases of tax fraud involving income from a legitimate source. Although the process of money laundering has been broken down into a number of steps, it is generally agreed by law enforcement and regulatory officials that the point at which criminals are most vulnerable to detection is “placement.” Placement is the concealing of illicit proceeds by converting the cash to another medium that is more convenient or less suspicious for purposes of exchange, such as property, cashier’s checks, or money orders; or depositing the funds into a financial institution account for subsequent disbursement. 
Because of the problems associated with converting and concealing large amounts of cash, placement is perhaps the most difficult part of money laundering and is currently the primary focus of U.S. law enforcement, legislative, and regulatory efforts to attack money laundering. Federal efforts to detect large cash deposits were significantly enhanced with the passage of the Bank Secrecy Act in 1970. The act requires individuals as well as banks and other financial institutions to report large foreign and domestic financial transactions to the Department of the Treasury. The act has been amended to provide substantial criminal and civil penalties for institutions that fail to file the required reports and for individuals who deliberately evade certain reporting requirements. Although the implementing regulations of the act require four types of reports, the report filed most frequently is the Currency Transaction Report (CTR). Financial institutions are required to file a CTR for each deposit, withdrawal, exchange of currency, or other payment or transfer, by, through, or to such institutions that involves a transaction in currency of more than $10,000. CTRs are filed on an Internal Revenue Service (IRS) form 4789, which is to be sent to the IRS Detroit Computing Center in Michigan. The volume of CTRs being filed has increased substantially in the past several years. In May 1993 we testified before the House Banking Committee that since 1987 the annual filings of CTRs had increased at an average rate of 12.7 percent. Increased efforts by federal regulatory and law enforcement agencies, as well as enhanced cooperation by the banks themselves, have significantly improved bank compliance with the reporting requirements. The substantial increase in the volume of currency transaction reports being filed has increased the importance of identifying those transactions thought to be suspicious. Although U.S. 
financial institutions have been reporting suspected money laundering for a number of years, specific criteria for determining whether a transaction is suspicious have never been developed. Consequently, institutions generally have a wide degree of latitude in deciding what constitutes suspicious activity. Financial institutions have developed a number of means designed to help ensure that they are not being used to launder illicit proceeds. Chief among these is a policy commonly referred to as “know your customer.” Among other things, the policy calls for financial institutions to verify the identity of individuals and businesses that are account holders and to be familiar enough with their banking practices so that transactions that are outside the norm can be readily identified. Officials from the Department of the Treasury and the American Bankers Association told us that most, if not all, financial institutions have implemented a know your customer policy and treat any transaction not typically associated with an account as suspicious. Moreover, guidance from regulatory agencies generally encourages institutions to use the policy in this manner. Although suspicious activity generally depends upon the customer, certain types of transactions are suspicious in and of themselves. A common type of suspicious transaction is structuring. Structuring occurs when a person conducts currency transactions in amounts of $10,000 or less for the purpose of evading the reporting requirements of the Bank Secrecy Act. In September 1992 the Association of Reserve City Bankers (now known as the Bankers Roundtable) published the results of a survey of suspicious transaction reporting by the nation’s major banking institutions. The report included more than 200 profiles of suspicious transactions that had been reported by 60 of the nation’s largest banking institutions. The majority of the transactions that were reported as suspicious (85 percent) involved structuring. 
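The structuring pattern described above lends itself to a simple monitoring rule: several individually sub-threshold cash deposits by one customer in a short period whose total exceeds the CTR line. The sketch below is an assumed illustration of how such a bank monitoring check might work; the 3-day window is a hypothetical parameter, not a regulatory criterion:

```python
# Hypothetical sketch of a structuring check: flag customers whose sub-$10,000
# cash deposits within a short window aggregate above the CTR threshold.
# Window size and data layout are illustrative assumptions.

from collections import defaultdict

CTR_THRESHOLD = 10_000

def flag_possible_structuring(transactions, window_days=3):
    """transactions: iterable of (customer, day, amount) cash deposits."""
    by_customer = defaultdict(list)
    for cust, day, amt in transactions:
        if amt <= CTR_THRESHOLD:       # individually below the reporting line
            by_customer[cust].append((day, amt))
    flagged = set()
    for cust, deposits in by_customer.items():
        deposits.sort()
        for day, _ in deposits:
            # aggregate this customer's deposits inside the sliding window
            total = sum(a for d, a in deposits if day <= d < day + window_days)
            if total > CTR_THRESHOLD:
                flagged.add(cust)
    return flagged

txns = [
    ("A", 1, 9_500), ("A", 2, 9_000),   # 18,500 in two days, each under the line
    ("B", 1, 4_000), ("B", 10, 4_000),  # spread out, never aggregates past 10,000
]
print(flag_possible_structuring(txns))  # {'A'}
```

A "know your customer" policy would refine this further, since what is anomalous depends on each account's normal activity.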
The report found that the most common method of structuring involved cash deposits but also included check cashing, cash withdrawals, and the purchase of monetary instruments. Other transactions that were reported as suspicious included customers changing the dollar amount of the transaction or cancelling the transaction when informed of the reporting requirement, unusually large purchases of money orders and cashier’s checks, unusually large cash deposits, and wire transfers of funds to a foreign country. The Department of the Treasury has identified 8 countries in addition to the United States that, as of July 1993, require the reporting of currency transactions that exceed a specified amount. However, many other countries require the recording of transactions over some specified threshold. These records can then be made available to law enforcement under the terms of that country’s bank secrecy laws. Many countries also either require or encourage financial institutions to report those transactions considered to be suspicious. In the United States, financial institutions have been encouraged for some time to report suspicious account activity that might be indicative of criminal activity. However, certain provisions in the Right to Financial Privacy Act (P.L. 95-630) of 1978 generated questions in the banking community about the type of customer information that could be disclosed in reporting a suspicious transaction, as well as concerns of potential liability for such disclosure. Subsequent legislation addressed these issues by, among other things, providing certain protections against civil liability for institutions reporting suspicious transactions. The Money Laundering Control Act of 1986 (P.L. 99-570) amended the Right to Financial Privacy Act to explicitly define the specific types of account information that financial institutions could disclose without customer permission, subpoena, summons, or search warrant. 
The intent was to strike a balance between the privacy rights of customers while allowing financial institutions to give government investigators enough information about the nature of possible violations in order for such investigators to determine whether there was a basis to proceed with a summons, subpoena, or search warrant for additional information. The 1986 amendments also established a limited “good faith” defense whereby financial institutions and their employees, when making a disclosure of certain specified information, would be shielded from civil liability to the customer for such disclosure or for any failure to notify the customer of such disclosure. Despite this provision, many banks were concerned that they might still be liable under the Right to Financial Privacy Act for disclosures made on a voluntary basis. Nothing in the statutory language required a financial institution to initiate a disclosure to a government agency of a suspected transaction, and some questioned whether the government would intervene on their behalf should a civil action be initiated against them. This situation was remedied, to some extent, by the promulgation of regulations by the Comptroller of the Currency and other federal agencies charged with the responsibility to monitor U.S. financial institutions. Comptroller of the Currency Regulation 12 C.F.R. Section 21.11 and corresponding regulations issued by the other bank regulatory agencies now require financial institutions to report suspected money laundering. Nonetheless, there was still concern over the possibility of civil suits because of reporting suspicious transactions. In 1992, under the Annunzio-Wylie Anti-Money Laundering Act (P.L. 102-550), financial institutions and their employees reporting suspicious transactions were given broadened immunity from civil liability under any state or federal law or regulation, such as the Right to Financial Privacy Act. 
The act also prohibits financial institutions from notifying persons involved in a suspicious transaction that the transaction has been reported. We were requested by the then Chairman of the Permanent Subcommittee on Investigations, Senate Governmental Affairs Committee, to review the manner in which suspicious activities that relate to possible money laundering are reported by determining how banks and other financial institutions report suspicious transactions, to whom the transactions are reported, the volume of reports made, how the reports are used, and whether the process can be improved. To respond to the request, we reviewed pertinent laws and regulations and published material such as academic and periodical literature. We also reviewed reports prepared by federal and state agencies, private research associations, and other experts. We interviewed officials at the Internal Revenue Service, the Department of the Treasury, the Federal Reserve Board, and the American Bankers Association. We also used the results of our previous reports dealing with money laundering that are cited in the text. In order to determine the volume and characteristics of suspicious transaction reports filed on Currency Transaction Reports, we used data from the computer database at the IRS Detroit Computing Center and relied upon IRS for the necessary computer programming. At our request, IRS identified the 20 institutions that filed the largest volume of Currency Transaction Reports that had been marked suspicious in calendar year 1993. In order to determine what factors might influence some institutions to mark a high percentage of CTRs as suspicious, we contacted the seven institutions of these 20 that had marked more than 8 percent of the CTRs filed as suspicious. This percentage was arbitrarily selected and has no statistical basis. 
To ascertain what states require suspicious transaction reporting and how the reports are used, we telefaxed a single-page questionnaire to bank regulatory officials in each state. All of the states responded. For those states that indicated there was a requirement, we conducted telephone interviews with regulatory and law enforcement personnel. We interviewed officials and observed operations at the state facility in Sacramento, California, that processes data for that state’s Office of the Attorney General. We chose this one state operation to visit because of its proximity to San Francisco, California, where we were reviewing IRS district operations. In order to determine the extent to which suspicious transaction reports are used to initiate investigations by IRS’ Criminal Investigation Division (CID), we used data provided us from a management information system at IRS headquarters. We visited or contacted by telephone a total of 10 CID district offices. The San Francisco office was selected because of its recognized role as an innovator in using suspicious transaction reports. The other district offices were judgmentally selected so as to include offices that had initiated a relatively high percentage of cases based on suspicious transaction reports as well as those that had initiated a low percentage. We provided a draft of this report to the American Bankers Association, Treasury’s Financial Crimes Enforcement Network, and IRS. Their comments are discussed on pages 37 and 38 and reproduced in full in appendixes I, II, and III. We did our review from April through December 1994 in accordance with generally accepted government auditing standards. Over the past several years, different forms have been developed for financial institutions to use in reporting transactions that might involve money laundering. 
Each of these forms has evolved from a recognized need, but the forms differ as to the amount and detail of the information provided and where the form is filed. Because of the concurrent development and implementation of the forms, the reports overlap one another. Consequently, the same suspicious activity may be reported two or more times, on two or more different forms, and to several different agencies. This chapter describes how suspicious transactions are reported and to whom they are reported. Chapter 3 discusses how the various reports are used by different law enforcement agencies. As previously discussed, financial institutions are required to report certain transactions that exceed $10,000 on a Currency Transaction Report (CTR). Beginning in 1990, CTRs have also been used by some institutions to identify suspicious transactions. Although this means of identifying suspicious transactions produces the largest volume of reports, most financial institutions do not use the CTR form to report suspicious transactions. Using the CTR for this purpose does not provide any information about the nature of the suspicious activity. Moreover, the validity of some of the suspicious transaction reports filed on a CTR is questionable because some have been filed erroneously. After it had received inquiries from financial institutions about whether suspicious transactions should be reported and what information should be reported, the Department of the Treasury issued Administrative Ruling 88-1 on June 22, 1988. The ruling encourages but does not require financial institutions to report those transactions that might be “...relevant to a possible violation of the Bank Secrecy Act or its regulations or indicative of money laundering or tax evasion” to the local Criminal Investigation Division (CID) office of the Internal Revenue Service. 
Immediately after Administrative Ruling 88-1 was released, Treasury officials began to notice that financial institutions were reporting suspicious transactions by filing CTRs with the word “suspicious” written across the form. To facilitate this type of reporting, Treasury issued a revised CTR form in January 1990 with a block that could be checked to indicate that the transaction was suspicious. In addition, the instructions for the form were amended to read “This form may be filed for any suspicious transaction, even if it does not exceed $10,000.” Although the revised form makes it possible to identify transactions that have been designated as suspicious, the form does not provide for a description of the transaction. Consequently, there is no way to determine why the transaction was considered suspicious. All CTRs are to be filed with the IRS Detroit Computing Center, where they are processed and entered onto a computer database along with other reports required by the Bank Secrecy Act. Once a week, staff at the Center are to distribute copies of those CTRs that are marked suspicious to the CID district office that has jurisdiction over the state where the CTR was filed. In 1993 more than 10 million CTRs were filed, 63,536 of them marked suspicious. As figure 2.1 demonstrates, the number of suspicious CTRs filed since 1990 has remained relatively constant despite a substantial increase in the volume of CTRs filed. Of the 35,131 institutions filing CTRs in 1993, about 75 percent identified themselves as banks, credit unions, or savings and loan associations. Of these 26,029 financial institutions, only 4,473—about 17 percent—marked 1 or more of the CTRs they filed as suspicious. Overall, less than 1 percent of the CTRs filed by all financial institutions were marked suspicious. Table 2.1 provides additional information on institutions that filed CTRs and suspicious CTRs in 1993. Table 2.2 provides additional data on the suspicious CTRs filed in 1993. 
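The 1993 filing figures above can be verified with simple arithmetic. The sketch below uses only the counts stated in the text (the variable names are ours, and the "more than 10 million" CTR total is treated as a lower bound, so the last share is an upper bound):

```python
# Quick arithmetic check of the 1993 CTR filing figures cited above.
# All counts come from the report; variable names are our own.
total_ctrs = 10_000_000           # "more than 10 million" -- used as a lower bound
suspicious_ctrs = 63_536          # CTRs marked suspicious
filing_institutions = 35_131      # all institutions that filed CTRs
depository_institutions = 26_029  # banks, credit unions, and savings and loans
suspicious_filers = 4_473         # depository institutions marking 1 or more CTRs suspicious

print(f"Depository share of filers: {depository_institutions / filing_institutions:.0%}")
print(f"Depository filers marking any CTR suspicious: {suspicious_filers / depository_institutions:.0%}")
# Because total_ctrs is a lower bound, this share is an upper bound.
print(f"Suspicious share of all CTRs (at most): {suspicious_ctrs / total_ctrs:.2%}")
```

Run as shown, the shares come out to roughly 74 percent, 17 percent, and well under 1 percent, consistent with the "about 75 percent," "about 17 percent," and "less than 1 percent" figures in the text.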
We discussed suspicious transaction reporting with officials of the two largest (ranked by total assets) banks in the country. Both banks have a policy of not filing suspicious CTRs. The reasons given for this policy were a concern over inadequate internal review and evaluation of the reports, possible civil liability for violating a customer’s right to privacy, and the lack of space on the form to describe why the transaction was considered to be suspicious. Although a total of 5,138 institutions filed suspicious CTRs in 1993, over a quarter of the 63,536 suspicious CTRs were filed by just 20 institutions. At the request of the Department of the Treasury, we are not revealing the identity of these institutions. However, table 2.3 provides additional information concerning these institutions. In order to determine the factors that influence some institutions to mark a high percentage of CTRs as suspicious, we contacted the seven institutions that had marked more than 8 percent of the CTRs they filed as suspicious. This percentage was arbitrarily selected and has no statistical basis. We found a variety of reasons why institutions filed suspicious CTRs, and we also identified several instances where suspicious CTRs were filed erroneously. The institution filing the largest number of suspicious CTRs—over 5 percent of those filed nationwide—is not a financial institution but a large corporation that provides money transmitting services at thousands of locations nationwide. Under procedures developed by the company, all transactions over a specified dollar threshold set by the company are to be monitored at a central location, where the decision is made about whether or not to file a suspicious CTR. Company officials we spoke with told us that because of the nature of their business, they are inclined to regard many cash transactions as suspicious even though the amounts might be relatively small compared to typical transactions at a financial institution.
The eleventh largest filer of suspicious CTRs is also not a financial institution but a liquor store that operates a check cashing service. Staff from the store told us that they had been filing the suspicious CTRs erroneously because of incorrect instructions they had received. An IRS agent had informed them that a CTR was to be filed whenever a customer’s total transactions exceeded $10,000. (Although Treasury regulations do call for aggregating transactions, the time period specified is 1 business day.) The store maintained records on individual customers so that any time a customer’s transaction total exceeded $10,000—which may have taken several months or longer—a CTR would be filed on each subsequent transaction, no matter what the amount was. According to store personnel, they had also been told by IRS to classify these CTRs as suspicious since none of the other transaction descriptions on the form were appropriate to describe the transaction. According to officials we spoke with at the institution filing the fifteenth largest volume of suspicious CTRs—a small bank—most, if not all, of its suspicious CTRs were filed at the request of IRS. We were told that IRS had informed the bank that an account holder was under investigation and that deposit activity for the account was generally under the $10,000 reporting threshold for a CTR. The bank officials also told us that IRS requested the bank to file a suspicious CTR for every transaction, regardless of the amount, so that IRS would be able to monitor the account activity. Consequently, the suspicious CTRs were not being filed because the bank considered the transactions to be suspicious but in order to allow IRS to monitor activity that would not otherwise be reported. The seventh largest filer of suspicious CTRs is a small bank located in one of the nation’s largest cities.
We were told that after the bank initially determines that an account has had a single suspicious transaction, its policy is to file suspicious CTRs on all subsequent transactions for that account. Bank officials told us that many of the suspicious CTRs filed by the bank are likely to be erroneous since not all subsequent transactions might be considered suspicious. We were also told that the bank had been heavily fined in the past for Bank Secrecy Act violations. As proof of its willingness to comply with the spirit as well as the letter of the law, the bank has implemented a policy that encourages employees to file suspicious CTRs whenever there is any questionable activity. This “when in doubt, file” philosophy was echoed by the remaining three banks of the seven that we spoke with. Federal regulations require financial institutions to file Criminal Referral Forms or Reports of Apparent Crime (CRF) to report known or suspected crimes, such as credit card fraud, employee theft, and check kiting. In 1988 the activity to be reported was broadened to include suspected structuring of transactions to evade the CTR reporting requirements, other violations of the Bank Secrecy Act, and money laundering. (See ch. 1, p. 14.) Each of the financial regulatory agencies requires that its own form be used for the report. The different forms, however, provide essentially the same information about the identity of the reporting institution and the individual or business that is the subject of the report. Each form also differs substantially from the CTR in that each has space for a description of the transaction or activity that is being reported as suspicious. The directions for filing the reports require the financial institution to send the original to the cognizant regulatory agency and copies to the nearest office of the United States Attorney, the closest office of the Federal Bureau of Investigation, and the Department of the Treasury.
The instructions also specify that when suspected money laundering and/or Bank Secrecy Act violations are being reported, a copy of the report is to be sent to the local office of the IRS Criminal Investigation Division. Table 2.4 shows the CRFs filed with the regulatory agencies, including those CRFs reporting suspected money laundering and/or Bank Secrecy Act violations. As discussed in chapter 1, the Money Laundering Control Act of 1986 amended the Right to Financial Privacy Act with provisions that authorized financial institutions to disclose certain specified account information. Recognizing the potential value of information and reports of suspicious transactions that could now be obtained from financial institutions, special agents with the IRS Criminal Investigation Division (CID) in the San Francisco, California, district office began a local initiative in 1987 to capitalize on the legislation. Under the initiative, financial institutions—primarily banks—in IRS’ western region were asked to report suspicious transactions directly to the local CID office. At first, the reports were taken over the telephone. As cooperation by the banks increased and the volume of telephone calls became difficult to manage, the financial institutions began filing the reports on a one-page form that was to be mailed to the CID district office. The form is shorter than the multipage Criminal Referral Form used by financial regulatory agencies but, similar to the CRF, has space for a narrative description of the suspicious nature of the transaction. Currently, even though financial institutions throughout the country use CTRs and/or CRFs to make suspicious transaction reports, some institutions in the western part of the country continue to file an additional report directly with the local CID district office. 
The reports are to be evaluated and researched at the district, sent to an IRS computer service center in California, transcribed onto computer tape, and mailed to the IRS Detroit Computing Center, where they are to be put on a database. As of July 1994 a total of 68,111 reports had been filed with CID district offices in IRS’ western region—mostly with the San Francisco office. In calendar year 1993 a total of 91 financial institutions filed 20,940 reports with the CID district offices, again mostly with the San Francisco office. Some of the reports that were filed were copies of CRFs that were filed with the CID office in accordance with the filing instructions. Others, however, were the one-page form that some banks continue to use. IRS officials do not believe that any were copies of suspicious CTRs. Before federal agencies developed forms for reporting suspicious transactions, Arizona was using its own form. In 1985 the Arizona Attorney General’s Office developed a voluntary, informal reporting system relating to possible money laundering activity through financial institutions. Suspected money laundering and suspicious transactions were to be reported to the state Attorney General on a one-page form that requested identifying information concerning the customer and the nature of the transaction. By 1990 the state was receiving approximately 150 reports of suspicious transactions a month. In 1991 a state law was passed requiring any state or federally chartered institution to file with the state copies of various reports made to the Department of the Treasury. The state law also provides that the timely filing of a report with the appropriate federal agency shall be deemed compliance with the state requirements if such reports are already being supplied to the state. Arizona had been receiving copies of suspicious CTRs filed by state financial institutions since August 1989.
Consequently, financial institutions were excused from filing copies of suspicious CTRs but were required to file copies of CRFs. The state Attorney General’s Office accepted a copy of a CRF filed in lieu of the state form. Officials with the Arizona Attorney General’s Office told us that in 1993 the state received an average of 300 reports of suspicious transactions a month, not including those copies of suspicious CTRs received on computer tape from IRS. About two-thirds of the reports were copies of CRFs filed with financial regulatory agencies. The remaining reports were made on the state form, which, we were told, some financial institutions used for situations they felt did not warrant a CRF.

Information concerning suspicious transactions can be an effective means of identifying a wide variety of criminal activity. Even so, use of the information by law enforcement at the federal and state levels is limited and inconsistent. No federal agency has been designated as responsible for developing and administering a program that would manage these resources with a focused, nationwide perspective. Although the Internal Revenue Service is the primary recipient of the reports, the use of the reports is a local initiative and varies among offices. Several states have recognized the value of the reports, but their ability to use the information also differs because access to the data varies among the states. As discussed in chapter 2, district offices of IRS’ Criminal Investigation Division receive reports of suspicious transactions in several different ways. CID agents we spoke with at both the headquarters and district levels described suspicious transaction reports from financial institutions as extremely valuable intelligence leads. IRS does not keep records or data to measure the value of the reports.
However, agents we spoke with at the field level related numerous examples of major investigations that had been initiated on the basis of suspicious transaction reports made by financial institutions. These examples include the following:

- In March 1994 a Texas funeral director was indicted along with three other individuals in U.S. District Court on charges that they accepted $4.9 million in drug proceeds during a 5-week period in 1989. The investigation originated when a banker became suspicious of large cash deposits being made into the account of the funeral home and telephoned the CID district office in Dallas.

- In June 1994 a technical engineer with the Bureau of Engraving and Printing in Washington, D.C., was arrested and charged with the theft of $1.7 million worth of newly printed hundred-dollar bills. Tellers at a bank in Annapolis, Maryland, became suspicious and telephoned the CID district office in Baltimore after the individual made several deposits just under the $10,000 reporting threshold.

- In November 1993 the San Francisco CID district office received a Criminal Referral Form regarding possible structuring of deposits in order to avoid having a CTR filed. On the basis of a subsequent investigation by CID and the U.S. Postal Service, a Post Office employee has been charged with embezzling over $600,000 from the Postal Service over the past several years.

- Telephone calls from two different banks to the Richmond, Virginia, CID district office reporting that an individual was purchasing cashier’s checks with cash in amounts just under $10,000 resulted in a major narcotics ring being exposed. Eventually, 14 individuals were convicted and over $1.5 million worth of cash, vehicles, and real estate was seized.

- The Houston, Texas, CID district office received a report from a bank that an individual had deposited more than $12 million in cash during a 4-month period claiming that the money was to be used to open a chain of 13 stores to sell beauty and clothing products. A subsequent investigation by IRS and the Drug Enforcement Administration resulted in the indictment of four individuals for trafficking in cocaine.

- In New York City, a 2-year investigation by several federal law enforcement agencies resulted in the indictment in September 1994 of 30 grocery store owners accused of food stamp fraud. The case was initiated on the basis of a suspicious CTR.

- A telephone call from a bank to CID agents in Oklahoma City began a joint investigation that, 2 years later, led to the seizure of over 26 pounds of heroin at that city’s airport. The estimated value of the drugs was $20 million. The suspicious transaction originally reported involved two individuals using cash to purchase cashier’s checks for less than $10,000.

Reports of suspicious transactions are a source of intelligence data for CID special agents throughout IRS. As discussed above, districts have used the reports to initiate a number of major investigations. However, the reports are not managed from an agencywide perspective. The extent to which agents in IRS’ 35 CID district offices solicit, process, evaluate, and use the reports is up to the discretion of the district CID chief and varies from one district to another. As a result, IRS cannot be certain the reports are being used to their full potential throughout the agency. There are no IRS procedures or policies as to how suspicious transaction reports are to be managed at the district level.
The CID Investigative Handbook offers only the following guidance: “The [district CID chief] should consider designating specific special agents to be responsible for responding to financial institutions that provide information on suspicious currency transactions and for evaluating the information received to determine if a criminal investigation is warranted.” During our review, CID management at the national office surveyed the 35 district CID offices to determine local policies regarding the receipt and evaluation of suspicious transaction reports. The results of the survey indicated that the districts differ significantly as to the level of effort spent evaluating the reports and the amount of emphasis given the initiative by district management. CID officials told us that some districts place much more emphasis on agents establishing a close, working relationship with financial institutions than do other districts. In these districts, for example, one agent is designated to spend much of his or her time personally contacting financial institutions and trade associations to explain the importance of the suspicious transaction reports. We were told that, typically, the institutions in these districts will often call the agent personally even before a report is prepared. According to CID officials, many of the districts maintain a localized computer database of every report that is received. This database is then checked for prior reports when newly received reports are evaluated. Not all of the districts maintain such a database, however, so that IRS does not know how many reports have been received nationwide. Without this information, IRS cannot assess the management of the reports from an agencywide perspective. The CID districts also differ on how individual reports are evaluated. We were told that some districts assign the reports to agents who decide if further investigation is warranted on the basis of the information in the report. 
Other districts have a policy of researching every report against databases both internal and external to IRS before deciding if an investigation should be opened. The districts vary widely on the role the reports play in the initiation of investigations. From October 1990 to June 1994 CID district offices initiated over 21,000 cases. On an agencywide basis, about 4 percent of the cases were initiated as a result of reports of suspicious transactions. Among individual districts, however, the rate varied from 0 to over 18 percent. CID officials said that they did not know why the rates varied. In our opinion, the variance in the rates is an indication that the reports could be receiving different amounts of emphasis among the districts. Table 3.1 shows the rates for all of the CID district offices. CID officials told us that the majority of CID district offices share suspicious transaction reports with the Examination function in IRS. Under these procedures, if CID does not initiate a criminal investigation on a report, the information will be passed on to tax examiners to use in identifying tax fraud. IRS does not keep records on how useful suspicious transaction reports have been in this regard. Several states have recognized the value of suspicious transaction reports. The type of report these states receive, however, differs among the states so that the information available for state law enforcement agencies to work with varies considerably. Moreover, no state has access to the reports on the same basis as do federal authorities. In July 1993 Treasury Department officials announced the initiation of “Project Gateway,” a program that would allow authorized personnel in every state direct access to the database containing all of the Currency Transaction Reports, including those marked suspicious, at IRS’ Detroit Computing Center. Under the program, authorized personnel in each state would be able to access the data through computer terminals linked to the Center. 
As of November 1994, 47 states as well as the District of Columbia had entered into agreements with the Department of the Treasury to participate in Project Gateway, and a total of 40 states had already begun operations. Treasury officials told us that agreements with the remaining 3 states were in the final stages of negotiation. Although access to the data is now direct, states are limited as to what CTRs—including those marked suspicious—can be accessed. Under Project Gateway, state analysts must use a specific name to search the database and can access only those reports filed on the individual or business named. Consequently, states can use the data only on a reactive basis—that is, when they already have the name of a suspect. They cannot use the data on a proactive basis, as CID is able to, for targeting individuals for investigation on the basis of suspicious transaction reports having been filed. In response to our survey, 15 states said that they require financial institutions to report suspicious transactions that might involve money laundering. Nine of these states said they use the information to initiate criminal investigations. Five of the 15 states that require suspicious transaction reporting—Colorado, Connecticut, Idaho, Indiana, and Oklahoma—said they require financial institutions to file a copy of any Criminal Referral Form filed with the federal regulatory agencies. Officials in these states told us that the primary reason for receiving copies of the CRFs is to monitor reports of criminal activity occurring within the institutions. They said that they do not use the reports of suspicious customer transactions as a basis for initiating criminal investigations. As mentioned in chapter 2, several states have agreements with Treasury that allow them to receive copies of all CTRs filed within the state on computer tapes from the IRS Detroit Computing Center. 
Six states—Arizona, California, Florida, Illinois, New York, and Texas—are currently receiving CTRs, including those marked suspicious, on computer tape. The use of suspicious CTRs by these states varies. In Arizona, as previously discussed, the Attorney General’s Office also receives reports of suspicious transactions on CRFs as well as on the state’s own form. State officials told us that all of the reports are entered into the state’s own database and used on both a reactive and proactive basis. Florida is somewhat similar to Arizona in that it requires state-chartered banks to forward copies of CRFs filed to the state banking department. Florida officials said that suspicious CTRs received from IRS are put on a state database and used on a reactive basis. The CRFs, however, are researched and sent to local law enforcement agencies for further investigation at their discretion. New York officials said that they also receive copies of CRFs from state-chartered financial institutions. These are evaluated along with suspicious CTRs, and those reports of suspicious transactions that merit further attention are routed to the appropriate law enforcement agency. California does not require financial institutions to send copies of CRFs to the state. However, the state is receiving photocopies of the special reports provided by California financial institutions to IRS’ CID in the western region (see p. 23). California enters these reports onto a database along with the suspicious CTRs it receives from the Detroit Computing Center. All of the suspicious transaction reports are used on both a reactive and proactive basis by the state. Illinois and Texas officials told us that neither state receives copies of CRFs. However, both receive CTRs on magnetic tape from IRS. According to state officials, each state removes those marked suspicious and researches and evaluates them.
The resulting leads are sent to law enforcement units in the field for further investigation at their discretion. Other states receive copies of suspicious CTRs from sources other than IRS. Although Georgia, Nebraska, and Utah do not receive CTRs on computer tapes from IRS, each uses suspicious CTRs to some extent to target individuals for further investigation. Each of these states has a law requiring banks to provide the state with copies of CTRs filed with IRS. In addition to receiving copies of all CTRs filed, Georgia requires financial institutions to telefax copies of those CTRs marked suspicious to the state banking department. Although Nebraska does not have the capability to process CTRs filed on magnetic media, the state police receive copies from financial institutions of those filed on paper and review them for those marked suspicious. Similarly, an analyst with the Utah state police scans all CTRs received to identify those marked suspicious. Law enforcement officials from each of these three states told us that the reports are reviewed and evaluated and, where warranted, sent to field units for further investigation. Concurrent with our review, the Department of the Treasury and the financial institution regulatory agencies were in the process of reviewing various aspects of the federal government’s efforts to combat money laundering. Similarly, as discussed above, IRS’ Criminal Investigation Division had initiated a survey of how suspicious transaction reports are used and managed at the district office level. By December 1994, as we were preparing this report, these efforts had resulted in a number of proposals and agreements that could have a substantial impact on suspicious transaction reporting by financial institutions. 
For the past several years, a group known as the Interagency Bank Fraud Working Group has been attempting to consolidate the six separate CRFs being used into a single, standardized form that would be filed with a single recipient. As previously discussed, financial institutions use the forms to report several types of criminal activity, including suspected money laundering and/or attempts to evade currency reporting requirements. Under current procedures, the institution filing the CRF is also responsible for sending copies of the form to a number of regulatory and law enforcement agencies. The purpose of consolidating the forms and designating a single recipient was to ease the reporting burden on the financial institutions and to place responsibility for ensuring correct dissemination of the reports with the government rather than with the reporting institution. In August 1991 the six regulatory agencies signed a Memorandum of Understanding with Treasury’s Financial Crimes Enforcement Network (FinCEN) that authorized FinCEN to design, develop, implement, and maintain a computerized database containing the standardized CRFs. Under the agreement, financial institutions would file CRFs directly with FinCEN. In the interim, the Department of the Treasury, in conjunction with a requirement in 1992 legislation, formed the Bank Secrecy Act Advisory Group, composed of 30 individuals from various state and federal agencies as well as the private sector. The Advisory Group, which first met in April 1994, was charged with assessing all of the reporting and recordkeeping requirements of the act as well as other facets of the government’s efforts to combat money laundering. One of the issues discussed during the three meetings held in 1994 was how to facilitate the reporting of suspicious transactions by financial institutions.
In December 1994, as we were preparing this report, we were informed by representatives from FinCEN, the Bank Fraud Working Group, and the Bank Secrecy Act Advisory Group that the following agreements had been reached:

- The “suspicious transaction” block would be removed from the Currency Transaction Report and the form would no longer be used to report suspicious transactions. This action had been taken as part of a general effort to simplify the form by reducing the amount of information to be reported on the form.

- A standardized version of the Criminal Referral Form was being prepared that could be filed either on paper or electronically. The filing instructions for the form would specify that only one form would be filed, with FinCEN, rather than copies sent to various federal agencies.

- IRS’ Detroit Computing Center would provide processing services for the new CRF and also develop and maintain a centralized database of the reports.

- FinCEN would serve as database administrator and assure that the appropriate federal law enforcement agencies have access to the CRF database.

- CRFs reporting suspected Bank Secrecy Act violations and/or money laundering would be made available to the appropriate district offices of IRS’ Criminal Investigation Division.

- FinCEN was exploring the feasibility of making available to the states those CRFs reporting money laundering and/or Bank Secrecy Act violations. The reports would be made available to the states on the same basis as state access to the reports required by the Bank Secrecy Act.

- The database containing the consolidated CRF would be fully operational by September 1995.

Also in December 1994 we were informed by officials of IRS’ Criminal Investigation Division that procedures were being prepared to address how suspicious transaction reports were to be managed at the district level.
IRS officials said that these procedures would be incorporated into the CID Investigative Handbook and would help ensure consistent treatment and use of the reports. Among the areas to be emphasized were the importance of developing and maintaining a working relationship with financial institutions, promptly evaluating the reports received, and performing a minimum level of additional research on the reports. Financial institutions are in a unique position to assist law enforcement at the federal and state levels by reporting suspicious transactions that might indicate money laundering. Reports of suspicious transactions have led to the initiation of a number of major investigations dealing with a wide range of criminal activity. However, the lack of overall direction and control over the reporting of suspicious transactions has led to a situation where reports are filed with different agencies on different forms that vary as to the amount of useful information they contain. Although IRS has successfully used the reports to initiate a number of investigations, the management of—and emphasis given—the information varies among district offices. IRS has no agencywide policies or procedures regarding how best to solicit, process, and utilize the information. Because IRS cannot be certain the information is used and managed consistently, it has no assurance that the information is being used to its full potential throughout the Service. Several states have recognized the value of suspicious transaction reports as a criminal intelligence resource. However, use of the information by these states is limited compared to federal authorities because the type of information available to the states differs. Recent agreements and proposals made by the Department of the Treasury, IRS, and others are an indication that the problems associated with how suspicious transactions are reported are being addressed.
We believe that the actions planned, if properly implemented in a timely manner, will do much to provide for the consistent and centralized management of the reports that has been lacking. A draft of this report was provided to the American Bankers Association, FinCEN, and IRS for comment. The Association provided written comments on the report (see app. I) in which it said that it believes financial institutions have an excellent record of cooperating with law enforcement on the reporting of possible violations of law. It added that this cooperation should improve even more with the anticipated changes in suspicious transaction reporting discussed in this report because bankers will be better equipped to focus on reporting potential criminal violations rather than routine transactions. FinCEN provided written comments (see app. II) stating that it found the report to be comprehensive and accurate. IRS also provided written comments on the report (see app. III) and said that it generally agreed with the report's findings. The comments noted that, although CID should be allowed maximum flexibility in the use of its resources, national guidelines are being developed to ensure consistency in the evaluation and processing of suspicious transaction reports. IRS also noted that changes are being made to a CID management information system that will enable CID to better ensure the proper use of suspicious transaction reports and track its accomplishments in the area. IRS did, however, take exception to a statement in the executive summary of the report that describes the use of the IRS database of CTRs and suspicious CTRs as being reactive. IRS did not believe that the statement recognizes the proactive value of the data in identifying new targets or initiating new investigations. 
In clarifying these comments with IRS officials, we were informed that, although the word “database” was used, IRS was actually referring to the individual suspicious CTRs and not the computer database on which they are maintained. It was not our intention to portray suspicious CTRs as not having any proactive value. The statement in question refers specifically to the database and not to the individual reports on the database. As noted in chapter 2 (see p. 17), IRS procedures call for staff at the Detroit Computing Center to distribute copies of CTRs that have been marked suspicious to the appropriate CID district offices on a weekly basis. However, as we point out in chapter 3 (see p. 27), the extent to which these suspicious CTRs—as well as suspicious transactions reported on Criminal Referral Forms—are used proactively is up to the discretion of the district CID chief.
Pursuant to a congressional request, GAO provided information on money laundering activities, focusing on: (1) how suspicious transactions are reported; (2) how currency transactions reports are used by law enforcement agencies; and (3) whether the reporting process can be improved. GAO found that: (1) financial institutions file reports of suspicious transactions each year on various forms to various agencies, leading to the initiation of major investigations into various types of criminal activity; (2) there is no way of ensuring that the information is being used to its full potential, since there is no overall control or coordination of the reports; (3) the form that is filed most frequently is filed with the Internal Revenue Service (IRS) and kept on a centralized database, but the form is only useful in providing additional information on an investigation that has already been initiated; (4) other forms used to report suspicious transactions contain more useful information but, since they are filed with six different agencies and are not kept on a centralized database, they cannot be used on a reactive basis; (5) IRS has not developed agencywide procedures for managing suspicious transaction reports, resulting in varied use of the reports among 35 district offices; (6) 9 of the 15 states that receive copies of suspicious transaction reports use the information to initiate criminal investigations; and (7) the Department of the Treasury and IRS have agreed to substantial changes regarding how suspicious transactions are to be reported, how the information is to be used, and how to improve the reports' contributions at both the federal and state levels.
Merchandise trade, the exchange of goods with other nations, is an increasingly important component of the U.S. economy. The U.S. Customs Service collects data on imports and exports that the U.S. Census Bureau uses to produce statistics on U.S. trade. While Customs has numerous import responsibilities, its export functions include guarding against the exportation of illegal goods, such as protected technologies, stolen vehicles, and illegal currency. Customs has broad authority to enforce export laws and regulations. However, it has historically placed more emphasis on imports than on exports. While U.S. import data is recognized as generally reliable, export data is viewed as less accurate. A 1997 Census report notes that the value of U.S. exports has probably been underreported by between 3 and 7 percent but could be understated by as much as 10 percent. Underreporting of exports can significantly affect the accuracy of statistics on the nation's trade balance. Inaccurate trade statistics can be an impediment in negotiations for bilateral and multilateral trade agreements. Export statistics also are relied on by the government to calculate the gross domestic product (which is used to assess the performance of the U.S. economy) and to determine appropriate promotional programs for expanding exports. Export data is also used to establish controls on sensitive exports. A primary source of export statistics is information that is recorded on a form called the Shipper's Export Declaration (SED). The SED contains information such as the nature, value, quantity, and destination of the goods to be exported. Generally, exporters or their agents are required by regulation to file the SED for each export transaction having a value over a certain amount, now set at $2,500 for all shipments without a license. Under current regulations, the SED must be delivered to the exporting carrier prior to exportation. 
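As a rough illustration of the filing rule just described, the check an exporter's software might apply can be sketched as follows. This is a sketch only: the record type and field names are our own invention, only the $2,500 threshold for unlicensed shipments is modeled, and licensed shipments (which follow separate rules) are not covered.

```python
from dataclasses import dataclass

# Filing threshold for unlicensed shipments, per the regulation described above.
SED_THRESHOLD_USD = 2_500

@dataclass
class Shipment:
    """Hypothetical record holding the SED fields named in the report:
    the nature, value, quantity, and destination of the goods."""
    nature: str
    value_usd: float
    quantity: int
    destination: str

def sed_required(shipment: Shipment) -> bool:
    """An exporter (or its agent) must file a SED for any unlicensed
    export transaction valued over the threshold."""
    return shipment.value_usd > SED_THRESHOLD_USD
```

Under this sketch, a $3,000 unlicensed shipment would require a SED, while a $1,000 shipment would not.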
Ocean and air carriers, with a bond on file, are permitted to file the complete manifest (a carrier's manifest lists all the cargo it is transporting) with Customs within 4 working days after departure. For overland shipments, the SED must be presented to Customs at the time of export. The major sources of error in merchandise trade statistics include missing SEDs and incomplete or inaccurate reporting. Since the 1980s, Customs, Census, and other government agencies have conducted numerous studies, which found serious problems with companies properly completing the document and filing it at the required time and place. For example, Customs completed an audit of selected ocean vessel manifests in 1996 that found about 29 percent of shipments listed on the manifest did not have the required SED. In 1997, Customs conducted an audit of airline manifests that determined that 40 percent of SEDs were incorrectly completed. Without a properly completed SED, an export is either not recorded or recorded incorrectly. Currently, about one-third of all export transactions are recorded on paper SEDs. Census collects another third of the export data on a monthly basis directly from exporting businesses through an electronic filing system known as the Automated Export Reporting Program (AERP). Census is terminating AERP in December 1999 because it believes the system is outdated and that AES will provide more accurate data. (Twenty-five percent of all AERP transmissions contain errors that must be corrected.) Census officials stated that AERP has systems limitations related to the amount of data it can accept. They stated that the system would require a complete redesign in methodology and computer technology in order to be able to accept more participants and improve data quality, resulting in a system similar to AES. Census officials also noted that about one-third of all AERP participants submit their data late via AERP. 
Customs and Census initiated AES in 1991 to improve (1) the collection and reporting of export statistics and (2) the enforcement of export regulations. Initially, the system was designed to replace the manual process of handling paper SEDs with a more efficient and less costly automated process that would increase the accuracy, completeness, and timeliness of SED data. AES is an interactive system that allows exporters or their agents to electronically transmit SED information directly to Customs before a carrier's departure. In order to improve the quality of export data, data transmitted via AES is subjected to a series of automatic edits. The system in turn sends back to the exporter a message to check the data if it does not fall within statistical parameters developed by Customs and Census. (See app. II for information on how AES works.) According to Customs, AES was also designed to improve the enforcement of export controls by evaluating the risk of export shipments based on certain criteria, such as the country of destination and the type of cargo; compiling exporter histories; allowing for trend analysis; and providing inspectors with detailed commodity data prior to departure. Customs officials believe that the more export information they have, the more focused their efforts to target illegal shipments will be. Finally, in 1994, in response to several initiatives including the Vice President's National Performance Review, Customs decided that AES should be expanded to provide a centralized database for collecting and processing export documentation required by the U.S. government. Customs planned to work with other U.S. government agencies that have export-related responsibilities to help these agencies meet their export information requirements through AES. (See p. 18 for a discussion of the present status of the single electronic filing center.) Customs installed AES in all U.S. 
vessel ports in October 1996, and currently it is operational in all ports, including air, rail, and truck transit ports. Customs and Census officials estimate that they spent approximately $12.9 million to develop and implement AES from fiscal year 1992 to 1997. These costs include, among other things, expenses for contractors, travel, and training. According to Customs' and Census' figures, both agencies estimate that together they will spend an additional $32.2 million through fiscal year 2002 on AES implementation and maintenance. This new system would require companies to make various changes in how they submit their export data to Customs. Companies that submit their export data via AERP will have to modify the programming that processes their export data or return to submitting paper SEDs. Companies that submit their data via paper SEDs but want to participate in AES will have to automate their export processing, purchase AES software, or use the facilities of a port authority or service center to transmit their data. In addition, some segments of the trade community have alleged that AES will require major changes in their current business practices. Because Customs has not strictly enforced the legal requirement that companies submit their SEDs to the carrier prior to departure, many companies have grown accustomed to turning in their SEDs to the carriers late. AES will require that companies file their export data directly with Customs, rather than the carrier, prior to departure. The trade community has varying views on AES. To obtain these views, we conducted two surveys of potential AES users: (1) a nationally representative sample of 400 U.S. ocean freight forwarders and Non-Vessel Operating Common Carriers (NVOCC) and (2) 80 U.S. exporting companies that file paper SEDs. We also interviewed officials from 12 of the largest U.S. 
sea and air carrier companies and several trade groups representing various segments of the export community. (See app. III for complete survey results.) (We did not independently verify information provided by U.S. companies and trade groups.) We also interviewed Customs officials at 13 sea, air, and land ports. We completed the surveys and interviews as of June 1997. Although AES has features to improve export data collection and enforcement efforts and to reduce paperwork, the system's effectiveness is hindered by low participation of the export community. Unless AES participation increases significantly, AES will not enhance the quality of export data or the enforcement of export regulations. In addition, other factors may limit AES' ability to achieve its objective of enhancing export control enforcement. For example, Customs' plans to introduce a post-departure filing program may impede the system's effectiveness as a tool for targeting illegal exports. Finally, AES will not likely serve as a central point for collecting and processing all export documents because other export-related agencies have information needs that they say cannot be fulfilled through AES. Trade community participation in AES is currently very limited, and our work showed that most companies do not have immediate plans to participate in AES. As of September 1997, AES participants included 8 exporters, 27 freight forwarders, and 2 sea carriers out of tens of thousands of export-related businesses. Currently, less than 1 percent of all export data is being submitted via AES. Customs expects participation in AES to increase for several reasons. 
For example, Customs is planning to introduce the Automated Export System Post-Departure Authorized Special System (AES-PASS), which is a program designed to encourage participation in AES by allowing qualified exporters to submit a minimal amount of information prior to export—generally an exporter identification number and a reference number for the shipment. Further, Census is terminating AERP and hopes current users will switch to AES. Customs also anticipates an increase in participation since AES first came online in July 1997 for exports via air, truck, and rail. Despite Customs’ expectations of increased participation, most companies we surveyed do not have immediate plans to use AES. We surveyed 400 randomly selected freight forwarder companies and 80 exporters that file paper SEDs. As shown in figures 1 and 2, only about 36 percent of freight forwarders and about 32 percent of exporters we spoke with currently plan to use AES to submit their SED information; only 50 percent and 42 percent of those companies, respectively, reported that they plan to get on AES within the next 3 years. Of the companies that plan to use AES, only 4 percent of the freight forwarders and 5 percent of the exporters have filed a notice with Customs that they plan to participate in AES or are testing AES. (See fig. 3.) In addition, more than half the companies we surveyed that plan to use AES do not know when they will use it. Most companies did not know how much it will cost their company to implement AES and were not familiar with the AES-PASS program (see fig. 4). Companies and industry groups we spoke with cited certain benefits to getting on AES. The primary incentive mentioned was automating their export system. About 50 percent of the companies we surveyed that plan to use AES said that automation was an incentive to use AES. 
The other benefit voiced by over 15 percent of respondents was the potential for a single filing point for all export data, referred to as “one-stop filing” by Customs. In addition, those companies we interviewed that are already using AES said that they were doing so to reduce paperwork and personnel and associated administrative costs, take advantage of new automated initiatives, and participate in the development of AES. While the cost of automation and lack of knowledge regarding AES were cited as possible impediments to AES participation by the export community, predeparture filing emerged as a key concern among some segments of this group. Our work indicates that whether or not predeparture filing posed a problem for businesses was related to the type of export or mode of transportation used to export. According to industry groups and several companies’ officials we interviewed, filing information predeparture is inconsistent with their business practices. These officials told us that they often do not know the precise volume and value of their final shipment until just before departure, which makes it difficult to file their paperwork on time. Predeparture filing was a particular concern for exporters of bulk goods or grain commodities. Some of these companies said that they would have to enter estimates in AES prior to departure and that the estimates would then have to be revised later, thereby resulting in rework. One exporter described this as having to do “twice the work.” Regarding carriers, all of the eight airlines and air couriers we spoke with said that meeting the predeparture filing requirement would present a problem for their current business operations; six said that they would not participate in AES due to this requirement. 
While the air couriers we interviewed said that they generally have the SEDs in hand prior to departure, because of the fast-paced nature of the air courier business they are unable to input SED data into AES before the aircraft departs. Representatives from both these industries told us that they anticipate having to input data into AES as a service to exporters and freight forwarders. Representatives from companies participating in Customs’ evaluation of AES, conducted in two vessel ports in 1996 before AES was expanded to all ports, indicated problems with predeparture filing. Representatives from some companies stated that 80 percent of the time they have all the information needed to complete the SED prior to departure of the vessel. However, for the remaining 20 percent of the time, they have difficulty in obtaining and providing predeparture data. In addition, the evaluation did not include airlines and air couriers, which have significant concerns regarding predeparture filing. It also did not include exporters of bulk commodities that have similar concerns. Export industry groups also have repeatedly expressed concerns about the AES-PASS program, which allows exporters to file most of their export information postdeparture, but still requires companies to file some data prior to departure. Specifically, they have stated that AES-PASS will be costly and burdensome to exporters without providing much benefit to the government. For example, in a March 1997 letter to the Commissioner of Customs, a group of large exporters stated that AES-PASS requires exporters to bear a predeparture reporting burden for all shipments while doing “nothing to improve data collection or compliance.” They stated that AES-PASS would require two submissions for a single shipment—both pre- and post-departure—resulting in additional programming of automated processes. 
In June 1997, the Trade Resource Group, the private advisory group to Customs on AES, expressed similar concerns in a letter to the Commissioner of Customs. In response to the export community's continued dissatisfaction with AES requirements (particularly filing information predeparture via AES and/or AES-PASS), in June 1997 the Commissioner of Customs proposed that industry groups enter into formal negotiations with Customs to resolve issues of disagreement regarding AES. Customs has used such negotiations, which rely on an outside moderator, to resolve issues with the trade community in the past. According to Customs officials, this approach will provide a forum for the trade community and Customs to discuss, and potentially resolve, the outstanding issues of concern to both parties. Customs officials told us that they are uncertain as to when the negotiations will begin. AES is designed to provide a "smart targeting system" that would allow inspectors to focus their attention on possible illegal shipments among the thousands of exports leaving the country each day. For example, AES is designed to compile exporter histories, allow for trend analysis, and provide a prioritized list of targets for selective enforcement actions. However, AES' ability to meet this objective is limited by four major factors. First, AES does not currently link with other law enforcement databases, such as those maintained by the Treasury; the Federal Bureau of Investigation; and the National Insurance Crime Bureau (which maintains a database on stolen vehicles). Customs inspectors told us that AES would be a more effective enforcement tool if it linked with these databases, allowing inspectors to obtain information more quickly on exporters with prior export violations or on stolen vehicles that may be exported. 
Customs officials told us that they are considering trying to have AES link with other enforcement databases in the future, but that at present they have no definitive plans to do so. They noted that very few enforcement or administrative databases are directly linked to each other because of logistics, funding, and security concerns. Second, AES-PASS will not provide adequate information to target shipments because it only requires a minimal amount of data prior to departure—the exporter identification number; a reference number for the shipment; and a few additional data elements, such as the license code and number if the export requires a license. It will not provide the detailed commodity data that inspectors told us they need for better enforcement. Because so little predeparture information is provided on AES-PASS, some inspectors we interviewed were concerned that AES-PASS would undermine any advantage that AES would have provided, for example, ready access to more detailed commodity data predeparture. Third, AES allows SED information to be transmitted only hours before a shipment’s departure (as with the current paper system), and inspectors told us that in most cases this is not sufficient time for targeting possible illegal shipments. While some inspectors told us that they would need SED information about 4 hours in advance of the carrier’s departure in order to target shipments, others said they would need 24 hours. (We did not evaluate the feasibility of companies being able to file data within these time frames.) Finally, since participation in AES is voluntary, an illegal exporter is unlikely to use the system for filing export data. Inspectors at several ports told us that there is no incentive for exporters to get on AES, and others stated that they believe AES would need to be mandatory to be effective. 
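The targeting concept described above can be illustrated with a toy risk-scoring sketch. The watch lists, criteria, and weights here are invented purely for illustration and are not Customs' actual rules; the sketch simply combines the three factors the report names (destination country, type of cargo, and exporter history) into a score used to rank shipments.

```python
# Hypothetical watch lists -- stand-ins for whatever criteria Customs uses.
HIGH_RISK_DESTINATIONS = {"Country X", "Country Y"}
SENSITIVE_COMMODITIES = {"dual-use electronics", "controlled chemicals"}

def risk_score(destination: str, commodity: str, prior_violations: int) -> int:
    """Combine destination, cargo type, and exporter history into one score."""
    score = 0
    if destination in HIGH_RISK_DESTINATIONS:
        score += 2
    if commodity in SENSITIVE_COMMODITIES:
        score += 2
    score += min(prior_violations, 3)  # cap the history contribution
    return score

def prioritized_targets(shipments):
    """shipments: list of (destination, commodity, prior_violations) tuples.
    Returns the list sorted highest-risk first, i.e., a prioritized target list."""
    return sorted(shipments, key=lambda s: risk_score(*s), reverse=True)
```

The sketch also shows why inspectors viewed AES-PASS as a weak targeting input: with only an identification number and a reference number filed predeparture, the commodity field in such a scoring function would be unknown until after departure.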
According to both Census and Customs, AES has the potential to provide exporters with “one-stop shopping” by creating a single electronic filing center for all U.S. export data. AES was not designed to replace any agency’s authority to regulate exports. The system was designed to serve as a source of export data for agencies with export requirements and to reduce redundancies in filing and paperwork associated with various export control requirements. However, AES is unlikely to achieve those objectives because most agencies’ export requirements cannot be fully satisfied through AES. For example, 8 of the 13 agencies identified by Customs as having regulatory authority over exports are not using AES to fulfill their export licensing or permit requirements because of existing regulations that require them to retain their own licensing procedures, including collecting information provided by the exporter. As a result, exporters will have to continue to apply to multiple agencies for approval to export certain commodities. (About 30 percent of all U.S. freight forwarders export goods that require export licenses.) For example, exporters seeking to ship products that have both civilian and military applications would still have to apply directly to the Commerce Department’s Bureau of Export Administration for approval. In addition, exporters of chemicals and pharmaceutical drugs are required to apply directly to the Drug Enforcement Administration (DEA) 15 days prior to exportation in order for DEA to conduct an investigation. Although AES is designed to eliminate the paper SED, it will not substantially reduce or eliminate agency paperwork or the electronic filing associated with the issuance of export licenses, certificates, or permits. 
For example, the Department of Agriculture’s Food Safety and Inspection Service issues inspection certificates for agricultural exports that must accompany the shipment abroad, precluding the possibility of electronic filing through AES. In addition, DEA officials noted that they are governed by international conventions, to which the United States is a signatory, that mandate their use of internationally standardized paper licenses for exports of certain chemicals and pharmaceutical products. According to Customs officials, there are several obstacles that prevent them from quickly achieving this goal. For example, many agencies lack sufficient staff or budgetary resources, have outdated regulations that may need to be changed, and/or are reluctant to share data with other agencies even though they may collect the same data. Customs recognizes that these obstacles will need to be overcome in order to have AES fully interface with other export-related agencies. Despite these limitations, officials at three agencies with export reporting (rather than licensing) requirements—Census, the Maritime Administration, and the Energy Information Administration—indicate that AES has the potential to satisfy their needs. Specifically, it is expected to eliminate their paperwork processing and help them fulfill their reporting requirements. Several other agencies, including the Bureau of Export Administration, the Office of Foreign Assets Control, and the Office of Arms Control and Nonproliferation indicated that as designed, AES would provide a more efficient means to track and monitor cargo shipments against approved licenses. Currently, AES validates cargo against State Department and Bureau of Export Administration licenses. Since 1994, Customs has tried to develop an automated interface with other government agencies to maximize opportunities to share export information through AES and streamline data collection. 
After Customs determines the feasibility of working with a particular agency, Customs seeks to (1) reach commitments to collaborate on their use of AES, (2) define and incorporate the informational requirements of these prospective users, and (3) conclude the process with a memorandum of understanding (MOU) that guides the implementation of the final interface. Currently, only Census has signed an MOU with Customs legally stipulating each agency’s responsibility for collecting, transmitting, and securing data captured in AES, in addition to cost-sharing arrangements. Customs has obtained written commitments from five other agencies to collaborate on AES. Some of the areas being discussed are data to be included in AES, information sharing and access, and the development of compatible information systems. Of these six agencies, Customs has completed and incorporated into AES the user requirements of Census, the Bureau of Export Administration, and the Office of Defense Trade Controls, specifying each agency’s requirements for collecting and processing data. One reason that progress has been slow is that Customs has assigned only one full-time person to develop interfaces with other government agencies. Other agencies have not committed to use AES for a variety of reasons. For example, according to officials at the Bureau of Alcohol, Tobacco, and Firearms, the agency lacks sufficient resources to develop a compatible automated system, and DEA has regulations that preclude its use of AES. Officials at the Environmental Protection Agency told us that they cannot use AES, as they do not currently have an agreement with Census to access SED data. Furthermore, although AES does include Nuclear Regulatory Commission license codes in order to validate licensed shipments, agency officials indicated that the agency already has an automated information system that meets its needs. (See fig. 5.) 
It has been well documented that successful information systems require the continuing involvement and commitment of senior executives. In this case, where the concept of AES entails integrating the export reporting functions of 14 separate federal agencies, the extensive high-level coordination and exchange needed to explicitly define which export reporting and/or licensing requirements can be accommodated by AES, and which distinct licensing and/or reporting requirements must remain, are not presently in place. The quality of export data has been a long-term problem. AES represents a major initiative to improve the quality of export data that is used to negotiate trade agreements and enforce export laws and regulations. While the trade community believes export data needs to be automated, the reluctance of U.S. companies to participate and the uncertainty that other agencies will be able to interface with AES raise serious questions about the system's viability. In addition, Customs' planned use of AES as an enforcement tool is limited because AES is not currently linked to other law enforcement databases, and AES-PASS allows approved exporters to file almost all of their export data post-departure. We question whether AES will be able to meet its objectives without greater involvement of top management in resolving the operational and implementation problems we have identified. We believe the Commissioner of the U.S. Customs Service and the Director of the U.S. Census Bureau need to devote sustained management attention to AES. Specifically, these officials need to expeditiously assess the extent to which the export community's concerns can be addressed, the likely amount of participation in AES, the likely usefulness of AES in enhancing enforcement, and the extent to which other agencies will be able to use AES. 
In making this assessment, attention needs to be given to determining
- whether predeparture filing of export data is critical to improved export statistics and enforcement of U.S. laws and regulations and, if so, how far in advance inspectors need the information for AES to be an effective enforcement tool;
- whether a link between AES and the databases of law enforcement agencies can be established;
- whether allowing some exporters to file SEDs after departure would undermine the objective of achieving improved export data and/or render AES ineffective as an enforcement tool; and
- whether the requirements of other agencies can be modified or otherwise accommodated to permit their use of AES.

Once this assessment is done, we believe the agencies need to consider how or whether to proceed with implementing AES. If these problems are not resolved in the near future, we are concerned that Customs will continue to invest significant monies in a system that is likely to be of limited benefit. We recommend that the Secretaries of the Treasury and of Commerce direct the Commissioner of the U.S. Customs Service and the Director of the Bureau of the Census to delineate the concrete actions needed to improve AES' potential, and, after doing so, assess the costs and benefits of continuing to implement AES. A draft of this report was provided to Customs and Census. While Customs agreed that AES should interface with other enforcement and export databases and that AES-PASS should be reevaluated in light of its potential adverse effect on enforcement efforts, both agencies said that they believed our assessment of the level of participation in AES was premature. They said that early in the system's development, they decided to use a phased implementation approach. They also noted that participation in AES has increased since it was expanded to all modes of transportation in July 1997 and they expect participation to be greater in the future. 
However, they did not address our recommendation or specify the actions they plan to take to overcome obstacles to AES’ success. We disagree with Census’ and Customs’ view that our assessment of AES is premature. We believe our work provides important insights into issues that will affect AES’ success and that Customs and Census need to develop a strategy to address these issues. On the critical issue of participation, our survey revealed strong resistance among the export community that has serious implications for future participation. Unless AES achieves high participation and provides an interface among agencies with enforcement and export responsibilities, it is difficult to envision how the system can meet its objectives. We, therefore, continue to believe that Customs and Census should identify the specific actions needed to improve AES’ potential and, after doing so, assess the costs and benefits of continuing to implement AES. Census also expressed concern that the results from our surveys and interviews were not presented in such a way that the reader can determine the significance of the responses and that our work does not reflect the views of the entire export trade community. We used a variety of techniques to obtain the export community’s views regarding AES, including a nationally representative sample survey of 400 ocean freight forwarders. Our survey was necessarily limited to ocean freight forwarders because AES had not been extended to other modes of transportation. We believe that the results from this survey, when combined with those from our survey of the top 80 exporters that file paper SEDs, as well as in-depth interviews with 30 exporting companies and 14 of the top AERP filers, provide a reasonable basis on which to assess the views of a broad cross section of the export community regarding AES. We did not suggest that our assessment was based on a survey of the entire export community. 
Moreover, Census did not offer any studies that produced results that were inconsistent with what we found. (See app. IV for specific details on our scope and methodology.) As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 2 days after its issue date. At that time, we will provide copies of the report to appropriate congressional committees and the Commissioner of the U.S. Customs Service and the Director of the Bureau of the Census. We will also make copies available to other interested parties on request. This review was done under the direction of JayEtta Z. Hecker, Associate Director. If you or your staff have any questions concerning this report, please contact Ms. Hecker at (202) 512-8984. Major contributors to this report are listed in appendix VII. We obtained information from officials in six countries—Australia, Canada, Japan, South Korea, Mexico, and the United Kingdom—on their export procedures and systems for collecting export data. (The information we collected from these countries was self-reported; we did not independently verify it.) Almost all of these countries reported having implemented automated systems for collecting export data, and most countries reported that nearly 100 percent of their export data is collected via their automated systems. Most countries’ automated systems are voluntary and were automated within the last 5 years. These countries require that at least some export information be filed prior to departure. Further, almost all of the countries use their automated system in some way as a targeting tool to help with the control and enforcement of the countries’ export laws. 
Five of the six countries from which we obtained information have implemented an automated system to collect their export data (Australia, Japan, South Korea, Mexico, and the United Kingdom); Canada is currently piloting an automated system to collect export data (see table I.1). With the exception of Mexico, all countries’ automated systems are voluntary (including Canada’s new system). As an alternative to electronic filing, exporters in Australia, Canada, Japan, and the United Kingdom can file paper export declarations. Most of these countries’ automated systems were implemented in the 1990s, although Australia and Japan have had at least a partially automated system in place since the mid- to late-1980s. Most countries require that exporters or their agents file at least some information prior to departure. Japan requires that all export data be submitted prior to departure. Australia, Canada, South Korea, Mexico, and the United Kingdom, however, allow approved exporters to wait to file some of their information after departure. Australia requires that approved exporters file an export report as soon as the information is available; Canada requires that exporters file a report up to 5 days after the month of departure; South Korea requires that a report be filed within a day of departure; Mexico generally requires that a report be filed within a week of departure; and the United Kingdom requires exporters to file a completed report within 14 days of departure. All of the countries, with the exception of the United Kingdom, use their automated system to help control exports and target goods for inspection (Canada plans to use its system for this purpose). Some countries, such as South Korea and Japan, use pre-set criteria for targeting goods for inspection. Mexico’s automated system, on the other hand, randomly selects shipments for inspection. 
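The per-country post-departure deadlines described above can be encoded as a small lookup table. The sketch below is purely illustrative of the rules as reported; the function and names are hypothetical, not any country's actual system, and Canada's month-based rule and Australia's "as soon as available" rule are omitted because they do not reduce to a fixed day count.

```python
# Illustrative encoding (hypothetical helper, not any country's actual system)
# of the post-departure filing deadlines reported in the text.
from datetime import date, timedelta

# Maximum days after departure that approved exporters may file,
# per the deadlines reported above (0 = all data due before departure).
POST_DEPARTURE_DEADLINE_DAYS = {
    "Japan": 0,           # all export data must be submitted predeparture
    "South Korea": 1,     # report within a day of departure
    "Mexico": 7,          # report generally within a week of departure
    "United Kingdom": 14, # completed report within 14 days of departure
}

def filing_on_time(country, departed, filed):
    """Return True if a post-departure filing meets the country's deadline."""
    deadline = POST_DEPARTURE_DEADLINE_DAYS[country]
    return filed <= departed + timedelta(days=deadline)

# A U.K. filing 9 days after departure is within the 14-day window;
# a South Korean filing 9 days out is not.
assert filing_on_time("United Kingdom", date(1997, 7, 1), date(1997, 7, 10))
assert not filing_on_time("South Korea", date(1997, 7, 1), date(1997, 7, 10))
```

A real system would also need Canada's rule (up to 5 days after the month of departure), which depends on the calendar month rather than a day count.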
[Table I.1 summarized each country’s filing requirement, participation level, and 1996 export value; the table was flattened in extraction, so cells can no longer be attributed to specific countries. The filing-requirement entries read: predeparture, with optional post-departure filing for approved exporters (this option generally open only to certain bulk and agricultural shippers, but requires some information be filed predeparture); predeparture, with optional post-departure reporting for bulk cargo and perishable goods (requires some information be provided prior to departure); predeparture, with optional post-departure filing program (requires that invoice information still be presented upon export); predeparture, with optional post-departure filing for approved exporters (this option is open to both paper and electronic filers but requires some information be filed predeparture); and deadlines set locally at Customs ports. One participation level read “almost 100 percent,” and reported 1996 export values included $130 billion, $96 billion, and $113 billion. The table notes below list each country’s required data elements.]
(Australia) Reference number; type of export, establishment code; owner’s name; owner’s phone number; consignee’s name; consignee’s city; country of destination; port of loading; port of discharge; invoice currency; total free-on-board (FOB) value; intended date of export; number of F.C.L. containers, if applicable; mode of export; ship/aircraft identity; number of packages; commodity classification code; origin code; goods description; net quantity; gross weight; container type; coal, thermal use indicator; assay details; container number and seal number, if applicable; permit details, including permit number and encryption code; information on whether goods are subject to certain export concession arrangements, FOB value; and signature. 
(Canada) Information required for paper filing generally includes exporter name and address; consignee name and address; exporter’s business number; country of final destination; province and country of origin of goods; export permit number; description of goods; harmonized tariff system code of goods; quantity and unit of measure; value; signature of responsible party; mode of transportation; and reason for export. Goods exported by sea can be reported in a predeparture interim report that must include the following information: exporter name, address, and business number; consignee name and address; country/province of origin of goods, country of final destination; number of packages; description of goods; and, if containerized, container number. (Japan) User code; exporter code, name, and address; trading pattern code; airway bill or bill of lading number; description, number, quantity, value, and statistical code numbers of goods; destination and its code number; loading and storage place code; airline code, or name and nationality of vessel; and scheduled departure date. (South Korea) Forty-four items, including declarant; manufacturer; exporter name; buyer; value/quantity of goods; destination; consignee; letter of credit number; and weight. (Mexico) Sixty-two data items, including information on the exporting company and its location; goods’ quantity, value, and classification; transport company name and location; and data of the foreign trade transaction. Invoice information that must be provided upon export in paper form must contain the name of the exporting company; taxpayer identification number; date and number of the invoice; a general description, quantity, and value of the goods; information on the vehicle transporting the merchandise; number of the consolidated entry; name and signature; and number and license of the Customs broker. 
(United Kingdom) Filing requirements depend on the procedure being used, but a completed declaration via the automated system generally includes consignor/exporter; number of items declared; total packages; reference number; name and address of person or company making the declaration; code for country of ultimate destination; information on shipment container, if appropriate; identity and nationality of active means of transport crossing the border; mode of transport at the border; place of loading; location of goods; packages and description of goods, including marks and numbers, container numbers, and number and kind of goods; item number; tariff classification commodity code; net weight; any additional information, documents produced, certificates and authorizations; and value of goods. Participants in the U.K. automated system post-departure filing program generally must file the following information predeparture: name and address of exporter or agent; brief commercial description of goods; number and kind of packages/goods; marks and numbers on packages; net weight; and any additional information, documents produced, certificates, and authorizations. As currently implemented, the Automated Export System (AES) allows exporters or their agents to electronically transmit Shipper’s Export Declaration (SED) information directly to Customs. The process begins when either the exporter or agent transmits commodity data directly into AES or when the carrier transmits a receipt of goods message. (AES participants can transmit their commodity data either by developing their own software, using software from various AES-certified software vendors, via the Internet, or using facilities of a port authority or service center.) If the carrier transmits data via AES before the exporter, an “I owe you” (IOU) is established noting that the exporter has not yet transmitted commodity data. 
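The exporter/carrier matching step described above can be sketched as follows. This is a hypothetical illustration of the IOU mechanism, not Customs' implementation; the class and method names are invented.

```python
# Illustrative sketch (not Customs' actual AES implementation) of the
# matching step described above: commodity data from the exporter or agent
# is paired with the carrier's receipt-of-goods message by a shared
# shipment reference. If the carrier reports first, an "IOU" is recorded
# noting that commodity data is still outstanding.

class ShipmentMatcher:
    def __init__(self):
        self.commodity = {}  # shipment reference -> commodity data on file
        self.ious = set()    # references where the carrier filed first

    def receive_commodity(self, ref, data):
        """Exporter or agent transmits commodity data."""
        self.commodity[ref] = data
        self.ious.discard(ref)  # any outstanding IOU is now satisfied

    def receive_carrier_receipt(self, ref):
        """Carrier transmits a receipt-of-goods message."""
        if ref not in self.commodity:
            self.ious.add(ref)  # establish an IOU for the missing data
        return ref in self.commodity  # matched only if commodity data on file

matcher = ShipmentMatcher()
matched = matcher.receive_carrier_receipt("BOL-001")  # carrier files first
assert not matched and "BOL-001" in matcher.ious
matcher.receive_commodity("BOL-001", {"commodity_code": "8471.30"})
assert "BOL-001" not in matcher.ious  # IOU cleared once the exporter files
```

In the actual system, matching is followed by the edit checks described below; here only the pairing and IOU bookkeeping are shown.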
The commodity data passes through built-in edits that check for accurate and complete information and match it against U.S. agency requirement files. The system also matches commodity data sent by the exporter with transportation data (such as the name and flag of the vessel) sent by the carrier. The carrier is free to load the cargo unless it receives a “hold” message. AES will reject the shipment if core information, such as the commodity code, country name, or exporter name, is invalid or incomplete. These “fatal errors” must be corrected before merchandise is exported. (See fig. II.1.) AES will also generate warning messages that will not reject the shipment, but warnings must be corrected within 4 days after departure. Provided in the following section are questions and responses for our surveys of 400 U.S. freight forwarders and Non-Vessel Operating Common Carriers (NVOCC) and 80 U.S. exporting companies that file paper SEDs. All results are reported as percentages, and for each question, we present the number of respondents answering the question. (Certain questions were only to be answered by a subset of respondents, that is, those possessing a certain characteristic or giving a particular answer to a previous question.) For questions requesting a numerical answer (such as the number of employees) we present descriptive statistics, such as the median and the range of responses. In addition, for several questions where we report in the letter on the subset of respondents who plan to use AES, we provide results both for this subset group and for all respondents. Hello, this is , calling from the U.S. General Accounting Office. 
Senator Orrin Hatch, Chairman of the Senate Judiciary Committee, has asked us to obtain the views of the export community regarding the Customs Service's new Automated Export System, AES, and to collect information on company export practices which may be affected by AES. Your company has been chosen as part of a nationally representative sample of freight forwarders and NVOCCs for this study. The survey should take about 5 to 10 minutes of your time. We will need to speak with the individual in your company most familiar with your company's current and future export documentation procedures for the survey. We need some basic company background information in order to describe the kinds of companies we talked with in our survey report for the Congress. So, before we discuss AES, I'd like to ask some questions about the levels and kinds of export activities your company engages in and the approximate size of your company. 1. At any time from the beginning of 1996 through the present, did your company provide freight forwarding services to companies exporting products from the U.S.? Yes 91.2; No 8.8 2. At any time from the beginning of 1996 through the present, did your company also provide Non-Vessel Operating Common Carrier (NVOCC) services to companies exporting products from the United States? (All respondents who answered "No" to Q.1 must answer "Yes" to Q.2 to proceed with the survey. Those who answered "No" to both questions 1 and 2 were routed out of the survey and do not appear anywhere in this report.) 3. About what percentage of your company's export business, if any, would you say currently involves goods exported to Canada? Your best estimate will suffice. MEDIAN 0; RANGE (Minimum and Maximum) 0 - 100; INTERQUARTILE RANGE 1 4. In addition to exporting products by sea, does your company also provide services to clients exporting by air? 5. I'd also like some information about the size of your company. 
About how many employees (full-time equivalents) does your company have? Your best estimate will suffice. MEDIAN 6; RANGE 1 - 60,000; INTERQUARTILE RANGE 12; SUM 86,879 6. And approximately what would you say your company's gross revenues (or sales) were for 1996? Your best estimate will suffice. MEDIAN $1.25 million; RANGE 7,000 - 1.8 billion; INTERQUARTILE RANGE 4.5 million; SUM 4.64 billion 7. [question text not recovered] Don't know 0.9 8. About how many companies (clients) did your company provide export services for during 1996? MEDIAN [not recovered]; RANGE 1 - 50,000; INTERQUARTILE RANGE 130; SUM 132,033 Since the beginning of 1996, did your company export goods that required an export license? 2Ag Aside from export license and Census Bureau and Customs Service paperwork, did any of your exported goods involve reporting requirements to any other federal agencies since the beginning of 1996? (GO TO Q. Ins) 3Ag Which agencies had reporting requirements? Other (List below) During the last 3 years, that is, since 1994, were any of your export shipments inspected by the Customs Service? Im1 Does your company also act as import broker for companies importing products into the United States? (GO TO Q.9) Im2 Does your company use the Customs Service's Automated Broker Interface (ABI) system to submit your import data? Yes 84.7; No 13.6; Don't know 1.7 AUTOMATED EXPORT SYSTEM (AES) 9. The Customs Service is now implementing the Automated Export System (AES) to collect and process data for all parties involved in export trade. Have you ever heard of this system? (GO TO Q. 22) All respondents saying that they had not heard of AES and those expressing a desire for more information about AES were given Customs contact information. 11. Has anyone from the Customs Service or any other federal agency contacted your company regarding AES? 12. I'd like to know your company's plans, if any, regarding AES. Does your company plan to use AES to submit your required export data? 
Yes 47.6; No 18.7; Company hasn't decided 24.8; Don't know 8.9 (GO TO Q. 21) (GO TO Q. 14) 13. How would you describe the status of your company's involvement with AES? Studying AES 53.8; Planning to file a letter of intent with Customs 14.5; Have filed a letter of intent with Customs 3.4; Currently testing AES 0.9; Other (Specify) 19.7; Can't say/Don't know 7.7 14. What incentives, if any, do you see for going on AES? (DO NOT read list. Click all that respondent volunteers) None 21.0; One-stop filing 15.5; Cost savings for company 9.0; Convenience of automation 45.5; Better trade statistics 7.5; Other (Specify) 40.0 (GO TO Q. 15) Those That Plan to Use AES (GO TO Q. 15) One-stop filing 19.7; Cost savings for company 9.4; Convenience of automation 45.3; Better trade statistics 11.1; Other (Specify) 46.2 15. I'd like your views about incentives for using AES mentioned by others. Do you view as an incentive for your company to go on AES or not? (Each item was asked of respondents NOT volunteering the item in Q. 14. Results displayed include those who volunteered the item in question 14.) Volunteered Yes No Don't know B. Cost savings for company Volunteered Yes No Don't know N = 200 Volunteered Yes No Don't know Volunteered Yes No Don't know 16. About how much would you estimate it would cost your company to implement AES, if you chose to do so? (Asked of those planning to use AES or whose companies had not yet decided whether to use AES.) N = 117 17. About when do you plan to start using AES? (Asked of those planning to use AES.) Date Given: 7/97 - 1/2000 49.6 18. And about how long do you think it would take for your company to implement AES? (Asked of those whose companies had not yet decided whether to use AES.) MONTHS 1 - 6; YEARS 1 - 5; BY (DATE) 9/97 - Sometime in 1998 Automated Export System Post-Departure Authorized Special System (AES-PASS) 19. Are you familiar with AES-PASS? (Asked of those planning to use AES or whose companies had not yet decided whether to use AES.) 
Those That Plan to Use AES (GO TO Q.22) (GO TO Q.22) 20. Is your company likely to apply for AES-PASS status? (Asked of those who were familiar with AES-PASS and who plan to use AES or whose companies had not yet decided whether to use AES.) (GO TO Q.22) (GO TO Q.22) 21. Why does your company not plan to use AES?(DO NOT read list. Click all that respondent volunteers) (Asked of those who say they will not use AES.) Lack of knowledge about AES (GO TO Q. 22) ______________________________________________ (Reasons spontaneously volunteered) Predeparture filing requirement Cost of automation Personnel cost Company hardware or software incompatibility with AES Concerns about the amount of information required by AES Concerns about how the information will be used by Customs Concerns about privacy protection of information Other (Specify) N = 46 I'd like your views about some concerns mentioned by others in using AES. (Each item was asked of respondents who did not mention lack of knowledge above and who did NOT volunteer the item in Q. 21. Results displayed include those who volunteered the item in question 21 above.) Are you concerned or not about the predeparture SED filing requirement of AES? Volunteered Yes No Don't know Are you concerned or not about the cost of automation necessary for your company to get on AES? Volunteered Yes No Don't know Are you concerned or not about the amount of information required by AES? Volunteered Yes No Don't know Are you concerned or not about how the information will be used by Customs? 0 15.4 Are you concerned or not about privacy protection of information? Volunteered Yes No Don't know Shippers' Export Declaration (SED) Preparation and Filing Next, we'd like to know how your company currently prepares some of its export-related paperwork. There are a number of ways a company may prepare and file the SED with the Census Bureau. I'd like to ask you about the methods you used during the past year. 22. 
First, about what percentage of your export shipments required the filing of an SED during 1996? Your best estimate will suffice. MEDIAN 90; RANGE 0 - 100; INTERQUARTILE RANGE 28 23. Did your company submit all of the SEDs for those shipments to Customs or did someone else also submit SEDs for those shipments? We submitted all SEDs 13.8; Someone else submitted all or some of the SEDs 86.2 (GO TO Q. 25) N = 326 24. About what percentage of the SEDs for your shipments did YOUR company submit to Customs in 1996? (Results displayed include respondents who answered "We submitted all SEDs" to Q.23, scored as submitting 100 percent.) 25. How did your company submit its SEDs during 1996: Using paper SEDs, the Automated Export Reporting Program (AERP), AES, or an Internet-based company linked to AES? (click all that apply) (Results displayed include only respondents who submit SEDs.) For AERP Users Only: 26. The Census Bureau plans on phasing out the AERP system by the end of 1999. How does your company plan to submit its SEDs once the AERP system is no longer available? (Click all that apply) Company will use AES Company will use an Internet service No plans yet Submit paper SEDs Have the customer submit SEDs Other (specify) N = 10 For Paper SED Filers Only: 27. Does your company use a computer to manage any or all of its export-related record keeping? (Results displayed exclude paper filers who also file SEDs electronically.) And finally, we'd like to know about the filing of your export paperwork and the timing of your export shipments. 28. How difficult, if at all, was it for your company last year (1996) to file its paper SEDs with the carrier prior to departure of the goods? (Read each response option and click one) Of very great difficulty 4.6; Of great difficulty 6.2; Of moderate difficulty 11.4; Of some difficulty 16.6; Of little or no difficulty 61.2 29. During the last year (1996), did your company deliver ANY SEDs to the carrier following departure of the goods? Do not include any submitted through AERP. 
N = 307 30. About how many of your SEDs were delivered after departure of the goods? That concludes our interview. Thank you for your time and your cooperation. If there is any other aspect of AES you'd like to comment on, please feel free to do so now. This is , of the U.S. General Accounting Office. Senator Orrin Hatch, Chairman of the Senate Judiciary Committee, has asked us to obtain the views of the export community regarding the Customs Service's new Automated Export System, AES, and to collect information on company export practices which may be affected by AES. Your company has been chosen as part of a study of exporters who have filed export documentation with the Census Bureau in paper form. The survey should take about 5 to 10 minutes of your time. We need to speak with the individual in your company most familiar with your company's current and future export documentation procedures for the survey. We need some basic company background information in order to describe the kinds of companies we talked with in our survey report for the Congress. So, before we discuss AES, I'd like to ask some questions about the kinds of export activities your company engages in and the approximate size of your company. 1. First, we'd like to know how your company exports products. Does your company export products by air? 2. Does your company export products by sea? Yes 77.8; No 22.2 3. Does your company export products by means other than air or sea, such as truck or rail? 5. Next, I'd also like some information about the size of your company. About how many employees (full-time equivalents) does your company have? Your best estimate will suffice. MEDIAN 2,200; RANGE 6 - 647,000; INTERQUARTILE RANGE 7,205; SUM 1,610,598 6. And approximately what would you say your company's gross revenues (or sales) were for 1996? Your best estimate will suffice. MEDIAN $1.65 billion; RANGE 100,000 - 164 billion; INTERQUARTILE RANGE 7.6 billion; SUM 424 billion 7. 
About what percentage of your company's total business is involved in the EXPORT trade? MEDIAN 51; RANGE 1 - 100; INTERQUARTILE RANGE 40 Since the beginning of 1996, did your company export goods that required an export license? 2Ag Aside from export license and Census Bureau and Customs Service paperwork, did any of your exported goods involve reporting requirements to any other federal agencies since the beginning of 1996? (GO TO Q. Ins) 3Ag Which agencies had reporting requirements? Agriculture Department State Department Commerce Department Nuclear Regulatory Commission 0 Other (List below) During the last 3 years, that is, since 1994, were any of your export shipments inspected by the Customs Service? N = 62 Im1 Does your company also import products into the United States? (GO TO Q.9) Im2 Does your company use the Customs Service's Automated Broker Interface (ABI) system to submit import data? Yes 31.6; No 42.1; Don't know 26.3 9. The Customs Service is now implementing the Automated Export System (AES) to collect and process data for all parties involved in export trade. Have you ever heard of this system? (GO TO Q. 22) All respondents saying that they had not heard of AES and those expressing a desire for more information about AES were given Customs contact information. 11. Has anyone from the Customs Service or any other federal agency contacted your company regarding AES? N = 49 12. I'd like to know your company's plans, if any, regarding AES. Does your company plan to use AES to submit your required export data? Yes 40.8; No 22.4; Company hasn't decided 20.4; Don't know 16.3 (GO TO Q. 21) (GO TO Q. 14) 13. How would you describe the status of your company's involvement with AES? N = 20 14. What incentives, if any, do you see for going on AES? (DO NOT read list. Click all that respondent volunteers) None 23.7; One-stop filing 18.4; Cost savings for company 7.9; Convenience of automation 55.3; Better trade statistics 0; Other (Specify) 36.8 (GO TO Q. 
15) Those That Plan to Use AES None 15.0; One-stop filing 15.0; Cost savings for company 5.0; Convenience of automation 55.0; Better trade statistics 0; Other (Specify) 40.0 (GO TO Q. 15) 15. I'd like your views about incentives for using AES mentioned by others. Do you view as an incentive for your company to go on AES or not? (Each item was asked of respondents NOT volunteering the item in Q. 14. Results displayed include those who volunteered the item in question 14.) Volunteered Yes No Don't know N = 38 B. Cost savings for company Volunteered Yes No Don't know Volunteered Yes No Don't know Volunteered Yes No Don't know 16. About how much would you estimate it would cost your company to implement AES, if you chose to do so? (Asked of those planning to use AES or whose companies had not yet decided whether to use AES.) Don't know 70.0 17. About when do you plan to start using AES? (Asked of those planning to use AES.) Date Given: 7/97 - 1/99 18. And about how long do you think it would take for your company to implement AES? (Asked of those whose companies had not yet decided whether to use AES.) MONTHS 1 - 11; YEARS 2; BY (DATE) --- 19. Are you familiar with AES-PASS? (Asked of those planning to use AES or whose companies had not yet decided whether to use AES.) Those That Plan to Use AES 40.0 (GO TO Q.22) (GO TO Q.22) 20. Is your company likely to apply for AES-PASS status? (Asked of those who were familiar with AES-PASS and who plan to use AES or whose companies had not yet decided whether to use AES.) Yes 100.0 (GO TO Q.22); No 0 (GO TO Q.22) 21. Why does your company not plan to use AES? (DO NOT read list. Click all that respondent volunteers) (Asked of those who say they will not use AES.) Lack of knowledge about AES (GO TO Q. 
22) ______________________________________________ (Reasons spontaneously volunteered) Predeparture filing requirement 36.4; Cost of automation 0; Personnel cost 0; Company hardware or software incompatibility with AES 0; Concerns about the amount of information required by AES 0; Concerns about how the information will be used by Customs 0; Concerns about privacy protection of information 0; Other (Specify) 36.4 I'd like your views about some concerns mentioned by others in using AES. (Each item was asked of respondents who did not mention lack of knowledge above and who did NOT volunteer the item in Q. 21. Results displayed include those who volunteered the item in question 21 above.) Are you concerned or not about the predeparture SED filing requirement of AES? Volunteered Yes No Don't know Are you concerned or not about the cost of automation necessary for your company to get on AES? Volunteered Yes No Don't know Are you concerned or not about the amount of information required by AES? Volunteered Yes No Don't know Are you concerned or not about how the information will be used by Customs? Volunteered Yes No Don't know Are you concerned or not about privacy protection of information? 0 57.1 Next, we'd like to know how your company currently prepares some of its export-related paperwork. There are a number of ways a company may prepare and file the Shipper's Export Declaration form with the Census Bureau. I'd like to ask you about the methods you used during the past year. 22. First, about what percentage of your export shipments required the filing of an SED during 1996? Your best estimate will suffice. MEDIAN 95; RANGE 50 - 100; INTERQUARTILE RANGE 15 23. Did your company submit all of the SEDs for those shipments to Customs or did someone else also submit SEDs for those shipments? We submitted all SEDs Someone else submitted all or some of the SEDs (GO TO Q. 25) 71.4 24. About what percentage of the SEDs for your shipments did YOUR company submit to Customs in 1996? 
(Results displayed include respondents who answered "We submitted all SEDs" to Q.23, scored as submitting 100 percent.) MEDIAN 19.5; RANGE 0 - 100; INTERQUARTILE RANGE 100 25. How did your company submit its SEDs during 1996: Using paper SEDs, AERP, AES, or an Internet-based company linked to AES? (click all that apply) (Results displayed include only respondents who submit SEDs.) Paper 92.1; AERP 5.3; AES 0; Internet 0; Unknown 5.3 For AERP Users Only: 26. The Census Bureau plans on phasing out the AERP system by the end of 1999. How does your company plan to submit its SEDs once the AERP system is no longer available? (Click all that apply) Company will use AES Company will use an Internet service No plans yet Submit paper SEDs Have an agent or the customer submit SEDs 0 Other (specify) For Paper SED Filers Only: 27. Does your company use a computer to manage any or all of its export-related record keeping? (Results displayed exclude paper filers who also file electronically.) Yes 94.1 And finally, we'd like to know about the filing of your export paperwork and the timing of your export shipments. 28. How difficult, if at all, was it for your company last year (1996) to file its paper SEDs with the carrier prior to departure of the goods? (Read each response option and click one) Of very great difficulty 11.8; Of great difficulty 2.9; Of moderate difficulty 17.6; Of some difficulty 5.9; Of little or no difficulty 61.8 29. During the last year (1996), did your company deliver ANY SEDs to the carrier following departure of the goods? Do not include any submitted through AERP. 30. N = 35 That concludes our interview. Thank you for your time and your cooperation. If there is any other aspect of AES you'd like to comment on, please feel free to do so now. 
To determine whether AES is likely to achieve its objectives of improving export data, enhancing enforcement efforts, and streamlining export data collection, we interviewed Customs and Census headquarters officials and representatives of 12 government agencies with export-related responsibilities. We also visited 13 Customs ports, including air, sea, and land border ports, where we observed export processing and enforcement operations and interviewed numerous supervisory and line inspectors involved in these operations. We conducted interviews with over 30 potential users of AES, including 12 ocean and air carriers and all AES participants as of April 1997. We also interviewed over 10 of the top 16 AERP users in terms of the value and volume of their filings. We also met with several trade groups representing various segments of the export community. In addition, we analyzed Customs’ and Census’ AES planning documents and Customs’ strategic plans regarding its process for checking goods to be exported. We also reviewed data provided by both Customs and Census regarding their actual and projected costs for AES development. We did not independently verify the validity of their cost estimates. As part of our effort to determine the trade community’s plans for using AES, we conducted two surveys of potential AES users: U.S. ocean freight forwarders and exporters. A detailed summary of our methodology for these two surveys follows. The freight forwarder study population consisted of active licensed ocean freight forwarders and NVOCCs listed in the Federal Maritime Commission’s December 1996 Regulated Persons Index. The 1,939 freight forwarder headquarters and 2,341 NVOCC listings were merged and duplicates were eliminated, resulting in a total population of 3,209. A simple random sample of 400 cases was selected from the combined list.
Twelve cases, although listed in the index, were not currently providing and had not previously provided freight forwarding or NVOCC services and therefore were considered ineligible for the survey. An additional six companies were found to be subsidiaries of others on our list. In these instances, a single respondent was chosen to respond on behalf of both companies. We sent certified letters to companies we were unable to contact by phone. We received confirmation from the Postal Service that three of those cases were not located at the listed address, nor did the Postal Service have forwarding address information for those cases. The bonds and tariffs of two cases were cancelled by the Federal Maritime Commission. The population was adjusted to reflect these inactive cases. Applying the same adjustment to the sample resulted in a final sample size of 376. Telephone interviews were completed with 331 freight forwarders and NVOCCs, for a response rate of 88 percent. Forty-five sample members either refused to participate (30), could not be scheduled for an interview during the study’s time frame (4), or could not be contacted to confirm eligibility (11). Because this study is based on a probability sample, our estimates involve some statistical uncertainty. Percentages and other estimates contained in the report are the midpoints of the 95-percent confidence intervals for the value being estimated; confidence intervals are presented for the items quoted in the letter. To minimize nonsampling sources of error, such as question wording or sequencing effects and interviewer differences, the survey was pretested with 16 active freight forwarders and NVOCCs following intensive interviewer training and practice.
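The sample adjustments, response rate, and confidence-interval convention described above can be illustrated with a short calculation. This is a hedged sketch: the report does not specify GAO's exact interval method, so a normal-approximation interval with a finite-population correction is assumed here, and the 25-percent estimate used in the example is hypothetical.

```python
import math

def proportion_ci(p_hat, n, population=None, z=1.96):
    """Approximate 95% confidence interval for a sample proportion,
    with an optional finite-population correction (assumed method)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    if population is not None:
        # Finite-population correction: the sample is a nontrivial
        # share of the 3,209-case population.
        se *= math.sqrt((population - n) / (population - 1))
    return p_hat - z * se, p_hat + z * se

# Figures from the freight forwarder survey described above.
population = 3209       # combined forwarder/NVOCC list, duplicates removed
final_sample = 376      # 400 sampled, minus cases found to be inactive
completed = 331         # completed telephone interviews
response_rate = completed / final_sample
print(f"response rate: {response_rate:.0%}")  # 88%

# Hypothetical 25% estimate, to show the width of a reported interval.
low, high = proportion_ci(0.25, completed, population)
print(f"95% CI: {low:.3f} to {high:.3f}")
```

The midpoint convention in the text means a reported "25 percent" stands at the center of an interval roughly like the one printed above.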
The item nonresponse rate (the rate of interviewers not recording an answer to a question that should have been answered) for reported items ranged between 0 and 2 percent for questions asked of all respondents and between 0 and 5 percent for questions asked only of those not planning to use AES. We examined the Federal Maritime Commission’s database to determine whether systematic differences held between our sample and the parent population as well as whether systematic differences distinguished nonrespondents from our respondents. We examined each group in terms of number of branch offices, as an approximate measure of size, the mixture of cases from the freight forwarder or NVOCC listings, and the region of the country in which they operated. All nonrespondents are listed as having single offices, and about 6 percent of respondents are listed as having two or more offices. Respondents and nonrespondents alike were equally divided between the freight forwarder and NVOCC source listings. No difference was found between respondents and nonrespondents in terms of their geographic location nor between the sample and its parent population. The freight forwarders we interviewed are predominantly small companies. The great majority (94 percent) have single offices and few employees. Nearly one-half have 5 or fewer full-time employees, and 74 percent have fewer than 15. Collectively, our respondents employ a total of about 87,000 people, and they have a total of 490 office locations. Their home offices are located in 25 states and Puerto Rico. They served an estimated 132,000 clients during 1996 and had gross revenues of about $4.6 billion. We did not attempt to verify the accuracy of information, such as the cost of implementing AES, supplied by businesses during our interviews and surveys. 
The study population for the exporter survey consisted of the companies responsible for the greatest number of paper SEDs and/or those of the highest value filed with the Census Bureau in September 1996. Collectively, these companies filed or had their agents file 34,340 SEDs for exports worth $3.9 billion. The number of SEDs filed by individual filers ranged from 2 to 2,293, and the value of goods exported ranged from about $2.4 million to about $492 million. We obtained from the Census Bureau the names of the top 49 filers in terms of volume of SEDs filed and the top 49 filers in terms of the value of SEDs filed in September 1996. The two lists were combined and purged of duplicates. In addition, foreign embassies and U.S. foreign military sales units were removed from the list. The resulting list contained 80 filers located in 22 states. During the course of the study, we learned that for some companies, a single individual was responsible for one or more additional filers. Multiple cases for a single respondent were combined into a single case, leaving a final study population of 72 filers. Sixty-three of these companies responded to the survey, a response rate of 88 percent. Responding companies accounted for 92 percent of the SEDs filed by the total study population and 88 percent of their total value. To determine whether systematic differences distinguished nonrespondents from respondents, the two groups were compared in terms of the value and number of SEDs filed as well as their geographic location. Independent sample t-tests of the means of SED value and volume revealed no difference between the groups on either dimension. Because the distributions of these variables were nonnormal, a second test, which grouped cases according to whether they fell in the top or bottom half of each distribution, was performed. The comparison revealed no difference between the two groups. The geographic distribution of nonrespondents also paralleled that of respondents. 
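The two-step nonrespondent comparison described above, an independent-samples t-test of means followed by a median-split check because the value and volume distributions were nonnormal, can be sketched in a few lines. The per-filer data below are hypothetical, invented solely to illustrate the two tests; the report's actual case-level data are not published.

```python
import math
from statistics import mean, median, stdev

def two_sample_t(a, b):
    """Independent-samples t statistic with pooled variance, the kind of
    test used to compare respondents' and nonrespondents' SED means."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical SED volumes for the two groups (illustration only).
respondents = [120, 340, 95, 410, 2293, 60, 150]
nonrespondents = [110, 300, 88, 45]
t_stat = two_sample_t(respondents, nonrespondents)

# Second test for nonnormal distributions: split all cases at the overall
# median and count how many in each group fall in the top half.
cut = median(respondents + nonrespondents)
top = [sum(v > cut for v in group) for group in (respondents, nonrespondents)]
print(f"t = {t_stat:.2f}; cases above overall median: {top}")
```

The median-split step trades the normality assumption of the t-test for a coarser comparison of group composition above and below a common cut point.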
Item nonresponse for reported items ranged from 0 percent to 2 percent for questions asked of all respondents in this survey and from 0 to 5 percent for questions asked only of those planning to use AES. We did our work between November 1996 and August 1997 in Washington, D.C., and in various Customs port locations across the United States, in accordance with generally accepted government auditing standards.

The following are GAO’s comments on the Department of Commerce’s letter dated October 30, 1997.

1. Our draft report stated that predeparture filing is a major concern for some segments of the export community, including certain industry groups, airlines and air couriers, and companies that export bulk goods or grain commodities. However, we do not state that most companies we interviewed cited the requirement for predeparture filing as the main reason for not participating in AES. We note that nearly 40 percent of both the freight forwarders and exporters we surveyed reported that they have at least “some” to “very great” difficulty filing SEDs predeparture. In addition, about 40 percent of both groups said they filed SEDs late in 1996. Among companies that reported little or no difficulty filing SEDs predeparture, 28 percent of all ocean freight forwarders and 19 percent of exporters we surveyed said they filed SEDs late in 1996.

2. In commenting on the percent of companies that report only to Census and Customs, Census did not take into account those companies that reported having licensing requirements. Therefore, the statistics they cite are inaccurate. About 61 percent of all ocean freight forwarders and 32 percent of the exporters we surveyed have no license or other agency reporting requirements.

3. We note in our report that AES was designed to serve as a source of export data for agencies with export requirements and reduce redundancies in filing and paperwork associated with various export control requirements.
This paperwork includes license application data. We revised the text to make clear that AES was not designed to replace an agency’s authority to regulate exports.

4. We do not suggest that paper filers need to automate their procedures in order to file via AES. Rather, our report lists various options available to companies that want to convert from filing paper SEDs to filing via AES (see p. 4).

The following are GAO’s comments on the U.S. Customs Service’s letter dated October 27, 1997.

1. Our report does not state that the majority of the trade community supports full automation or that a majority recognizes the benefit of one-stop filing. Instead, our survey shows that 45 percent of all ocean freight forwarders and 55 percent of the exporters we surveyed cited the convenience of automation as an incentive to use AES. Only 16 percent of ocean freight forwarders and 18 percent of the exporters we surveyed cited one-stop filing as an incentive to use AES. Similarly, our work does not validate that 80 percent of the information required predeparture is in fact available predeparture. We note in our report that representatives from some companies participating in Customs’ 1996 evaluation of AES stated that they believe that 80 percent of the time they have information needed to complete the SED prior to departure of the vessel. We did not attempt to determine whether this was a universal view among companies in the exporting community. Conversely, our surveys show that nearly 40 percent of both the freight forwarders and exporters we surveyed reported that they have at least some to very great difficulty filing SEDs predeparture. About 40 percent of both groups said they filed SEDs late in 1996.

2. We note in our report that AES was designed to serve as a source of export data for agencies with various export control requirements and to reduce redundancies in filing and paperwork. This paperwork includes license application data.
However, we also state that AES is unlikely to achieve its objective of providing exporters with “one-stop shopping” because most agencies’ export requirements cannot be fully satisfied through AES. We also note that AES will not reduce or eliminate agency paperwork or the electronic filing associated with the issuance of export licenses, certificates, or permits. However, we have revised the text to make clear that AES was not designed to replace an agency’s authority to regulate exports.

Daniel R. Garcia, Senior Evaluator
Edward J. Laughlin, Senior Evaluator
Larry S. Thomas, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the potential impact of the Customs Service's Automated Export System (AES) and the views of the export community regarding AES, focusing on whether AES is likely to achieve its objectives of improving export data, enhancing enforcement efforts, and streamlining export data collection. GAO noted that: (1) it is not yet clear what benefits will result from the use of AES because many critical implementation issues remain unsolved; (2) although AES has the potential to improve export reporting and enhance enforcement efforts, it is unlikely to achieve these objectives unless more exporters are willing to participate and limitations that prevent other agencies from fully using the system are resolved; (3) concerning the trade community's limited participation, GAO found that: (a) only a small fraction of the export community is using AES; (b) most exporting companies responding to GAO's survey are not likely to use AES over the next 3 years; and (c) twenty-five percent of all U.S. 
ocean freight forwarders had not heard of AES; (4) benefits cited by companies using AES include automated filing, reduced paperwork, personnel, and administrative costs, participating in the initial development of AES, and filing all data at a central filing point; (5) some segments of the trade community contend that the predeparture filing requirement is inconsistent with their business practices and costly; (6) AES is designed to help target illegal shipments, identify high-risk shipments, and compile exporter histories; (7) the system's usefulness as an enforcement tool is limited because: (a) it is not linked with the databases of other law enforcement agencies; (b) a proposal to allow exporters to file data after shipment could undermine efforts to detect export violations; (c) AES allows export data to be transmitted only hours before a shipment departure, which may not provide sufficient time to target possible illegal shipments; and (d) many Customs inspectors anticipate that illegal exporters are unlikely to use AES to file their export data; (8) AES faces limitations in achieving its goal to create a single information collection and processing center for the electronic filing of required export documentation; (9) many export-related agencies are subject to existing regulations requiring them to retain their own licensing procedures and have requirements that will not be satisfied through AES; (10) Customs is attempting to resolve these issues through several means; and (11) a cost-benefit analysis is needed to determine how or whether to proceed with implementation of AES.
The Recovery Act was enacted on February 17, 2009, to help stimulate the United States economy by creating new jobs, as well as saving existing ones, and investing in projects that will provide long-term economic benefits. The Recovery Act requires that the President and heads of the federal agencies manage and expend Recovery Act funds to achieve the act’s purposes as quickly as possible and consistent with prudent management. In addition, the Recovery Act requires contracts funded under the act to be awarded as fixed-price contracts through the use of competitive procedures to the maximum extent possible. The Office of Management and Budget (OMB) issued guidance for implementing the Recovery Act and meeting “crucial accountability objectives” of the act, including, for example: timely awarding of Recovery Act funds; reporting on the use and public benefit of those funds; and ensuring that those funds are used for authorized purposes while mitigating the potential for fraud, waste, error, and abuse. In addition to these objectives, OMB supplemental guidance also provides other goals that agencies are to consider when using Recovery Act funds. Among those goals are investing in efforts that will provide jobs and have long-term public benefits, promoting local hiring, providing maximum practicable opportunities for small businesses, and supporting disadvantaged businesses. The guidance also identifies activities agencies should consider to mitigate risks, including determining what contract award methods will allow recipients to commence expenditures and activities as quickly as possible; providing oversight for non-fixed-price contracts that may be riskier to the government; and reviewing internal procurement rules to promote competition to the maximum extent practicable. Federal agencies using Recovery Act funds on contracts must take a number of new steps related to the solicitation of offers and award of contracts. 
For instance, to enhance the transparency to the public, the Federal Acquisition Regulation (FAR) was amended to require federal agencies to publicize on www.fedbizopps.gov contract actions that will be funded by the Recovery Act. The description on the Web site of the supplies and services should be clear and unambiguous to support public understanding of the procurement. After awarding a contract using other than fixed-price or competitive approaches, federal agencies are also required to publicize the rationale for doing so on the Web site. In addition, federal agencies should use specific codes when entering Recovery Act contract actions into FPDS-NG to indicate that Recovery Act funds are being used, in whole or in part. The FAR was also amended to implement the Recovery Act requirements that: only American-produced iron, steel, and manufactured goods be used in Recovery Act construction projects; access be provided for Comptroller General and IG audits and reviews of Recovery Act contracts and subcontracts; and whistleblower protections be provided. The act also requires the payment of at least locally prevailing wages to contractor employees working on Recovery Act projects, in accordance with the Davis-Bacon Act. Federal agencies are generally required to obtain full and open competition through competitive procedures when awarding government contracts, unless an exception to competition applies. Some authorized exceptions include when the supplies or services needed by the agency are available from only one responsible source and no other supplies or services will satisfy the agency’s needs; the agency’s need for the supplies or services is of such an unusual and compelling urgency that there would be serious injury if the agency were not permitted to limit the number of sources; or a statute expressly authorizes that the acquisition be made through another agency or from a specified source, such as SBA’s 8(a) program. 
In most cases, the use of noncompetitive contracting procedures must be properly justified in writing and certified by the appropriate agency official. The competition requirements that apply to federal agencies do not apply to the states, each of which has its own contract competition requirements. Additionally, purchases of supplies or services that are under certain dollar thresholds (usually from $3,000 to $100,000) may be acquired through the use of simplified acquisition procedures. These procedures provide a streamlined approach to procurements as a way to promote efficiency and economy in contracting. While full and open competition procedures do not apply to simplified acquisitions, federal agencies are still required to promote competition to the maximum extent practicable. When using simplified acquisition procedures, federal agencies can solicit from one source if they determine that only one source is reasonably available. Section 8(a) of the Small Business Act authorizes SBA to create a business development program to help small, socially and economically disadvantaged businesses compete in the American economy, including gaining access to the federal procurement market. This program, known as the 8(a) program, authorizes contracting by using procedures other than full and open competition, such as awarding sole-source contracts. Under the 8(a) program, when the anticipated value of a contract is below the “competitive threshold”—$5.5 million for acquisitions involving manufacturing and $3.5 million for all other acquisitions—the contract should be awarded on a sole-source basis to an eligible 8(a) business. Contracts above the competitive thresholds can be awarded based on competition limited only to 8(a) businesses when there is a reasonable expectation that at least two 8(a) businesses will submit offers. Sole- source contracts of any value may be awarded to businesses owned by an eligible Indian tribe or an Alaska Native Corporation. 
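The 8(a) award-approach rules summarized above reduce to a simple decision, sketched below. This is a simplification for illustration: the actual FAR rules include additional conditions, such as a reasonable expectation that at least two 8(a) businesses will submit offers before a competitive award, and the function name and return strings are invented here.

```python
def assumed_8a_approach(contract_value, manufacturing=False, tribal_or_anc=False):
    """Sketch of the 8(a) contracting approach described above.
    Competitive thresholds: $5.5 million for acquisitions involving
    manufacturing, $3.5 million for all other acquisitions. Businesses
    owned by an eligible Indian tribe or an Alaska Native Corporation
    may receive sole-source awards of any value."""
    if tribal_or_anc:
        return "sole-source"
    threshold = 5_500_000 if manufacturing else 3_500_000
    if contract_value < threshold:
        return "sole-source"
    return "competition limited to 8(a) businesses"

# A $2 million services contract falls below the $3.5 million threshold,
# so it defaults to a sole-source 8(a) award.
print(assumed_8a_approach(2_000_000))  # sole-source
print(assumed_8a_approach(6_000_000))  # competition limited to 8(a) businesses
```

Under this rule, the same dollar value can fall on either side of the threshold depending on whether the acquisition involves manufacturing, which is why the Recovery Act agencies discussed later treated sub-$3.5 million 8(a) awards as the fast, default sole-source path.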
Federal agencies are not required to provide written justification for sole-source contracts awarded under the 8(a) program, but regulations specify percentages of the work that must be performed by the 8(a) business with its own resources. The OMB Recovery Act implementing guidance encourages federal agencies to take advantage of authorized small business contracting programs, which may include the use of noncompetitive contracts, to create opportunities for small businesses. The Recovery Act provided an unprecedented level of funding for programs to be administered within the states at various levels. Recovery Act funds are being distributed to states, local entities, and individuals through a combination of formula and competitive grants and direct assistance. Nearly half of the approximately $580 billion associated with Recovery Act spending programs will flow to states and localities through about 50 state formula and discretionary grant programs as well as about 15 entitlement and other programs. Some of the funds are passed from the federal agencies through state governments to local governments, while other funds are provided directly to local governments or individuals by the federal agencies. As we previously reported, states are taking various approaches to ensure that internal controls are in place to manage risk up front, rather than after problems develop and deficiencies are identified. States have different capacities to manage and oversee the use of Recovery Act funds. Many of these differences result from the underlying differences in approaches to governance, organizational structures, and related systems and processes that are unique to each jurisdiction. To provide state-level oversight of the use of Recovery Act funds, many states appointed an individual or team, often in the governor’s office, to provide overarching guidance and monitoring for the state’s Recovery Act efforts. 
Since many of the programs and the processes and procedures used to implement them existed before the Recovery Act funds were provided, much of the focus of the state-level oversight efforts has been on the new aspects of the Recovery Act, such as the new recipient reporting requirements and state fiscal stabilization funds. More than two-thirds of the $26 billion that had been obligated on federal contracts through May 2010 was obligated on contracts that were already in place before the Recovery Act. Agencies used mechanisms such as task orders for services, delivery orders for supplies, and contract modifications to add work or funds to existing contracts. For these orders and modifications on existing contracts, the decisions to compete or not compete the underlying contracts predated the Recovery Act. About 89 percent of the Recovery Act funds obligated on pre-existing contracts were coded in FPDS-NG as being competed. Approximately one-third of Recovery Act federal contract obligations through May 2010 were obligated on new contracts. For these contracts, the decisions on whether to compete the contracts were made after the Recovery Act was enacted. As shown in figure 1, most Recovery Act dollars obligated on new federal contracts were on contracts that were competed. The new contracts that were not competed consisted of contracts awarded under the SBA’s 8(a) program, contracts awarded using simplified acquisition procedures, and other contracts that were awarded under authorized exceptions to competition, such as when only one source was available or the requirement was urgently needed. Almost 80 percent of the approximately $875 million obligated to noncompetitive new contracts went to businesses under SBA’s 8(a) program. Across both existing and new contracts, almost 90 percent of the $26 billion in Recovery Act contracting dollars through May 2010 were obligated on competitive contract actions.
See appendix I for detailed data on the obligations placed on Recovery Act contract actions by all federal agencies. Officials at the five federal agencies we reviewed told us that they chose their contracting approaches to meet their primary goals of obligating Recovery Act funds quickly and to high-priority projects, which sometimes led to using noncompetitive contract actions. The act and guidance from OMB and agency officials directed agencies to obligate Recovery Act funds quickly, creating a sense of urgency on the part of contracting staff. As a result, program and contracting staff identified programs, projects, and contract vehicles that would allow them to obligate funds within short time frames. Contracting officials at some of the agencies we visited told us that they considered both the relative risks of using noncompetitive contracting approaches and the benefits of obligating funds faster than if they had awarded new contracts using full and open competition. For example, the U.S. Army Corps of Engineers (USACE) chose construction projects that could be executed quickly by issuing task orders under previously awarded contracts with businesses under SBA’s 8(a) program. Further, contracting officials at USACE also noted that new sole-source contracts to 8(a) businesses typically take about 4 months to award, while a new competitive contract could take 12 to 14 months using full and open competition procedures. As shown in figure 2, most of the Recovery Act funds were obligated within the first two full fiscal quarters in which the funds were available for obligation. Officials at several of the selected federal agencies explained that the use of existing contracts allowed them to obligate funds quickly.
Whether an existing contract had been competed originally did not influence decisions about which of these contracts to use since the level of competition had already been established prior to the availability of Recovery Act funds. According to agency officials, programmatic priorities and the availability of contracts with the capacity to absorb and effectively use additional funding were the predominant factors in choosing which existing contracts received Recovery Act funds. Use of the 8(a) program to award new contracts allowed agencies to quickly obligate funds without competition as sole-source awards. For certain 8(a) contracts, such as those below $3.5 million, sole-source is the default contracting approach under federal regulations. Contracting officials at each of the federal agencies told us that the 8(a) program allowed them to quickly obligate funds on both new and existing contracts under $3.5 million and that the noncompetitive nature of the contracts was viewed as a trade-off for expediency and the ability to provide opportunities to small businesses. While speed was the primary driver agencies cited for using noncompetitive contracting approaches, noncompetitive awards were also used in a small number of new contracts that we reviewed when there was only one source available for specialized equipment or a specific service. For example, several National Institutes of Health (NIH) contract actions we reviewed were sole-source contracts for specialized medical equipment. In these cases, there was only one manufacturer that could meet the requirements of the contract according to the documentation in the contract files. At the five selected agencies, we found that all of the new noncompetitive Recovery Act contracts that required documented justification and approval for using other than full and open competition had such documentation. 
For most new noncompetitive Recovery Act contracts, specific documentation to justify the noncompetitive award was not required. However, we found that 21 of the new contracts awarded as of February 2010 at the five agencies we visited required documented justifications. For these 21 contracts, the contract files included the required justification and approval documentation for not using full and open competition. Almost all of the justifications we reviewed authorized a sole-source contract because there was only one responsible source and no other supplies or services would satisfy the agency’s requirements. Among these, about half were for purchases of proprietary parts or technology, and most of the others were contracts for utility services. The selected agencies added additional review processes, internal reporting, and coordination steps in response to the Recovery Act. While the measures implemented vary at each of the selected agencies, all have created additional processes to increase management oversight beyond their normal practices. IGs used a risk-based approach to target their initial oversight efforts, and did not specifically target noncompetitive contract actions because IGs did not view them as high risk. At most of the selected agencies, IGs chose to focus on areas and programs they judged to be higher risk, such as grant programs, which accounted for the majority of Recovery Act funding. Alongside the IGs’ individual efforts, the Recovery Act also established the Recovery Accountability and Transparency Board to coordinate among the IGs and provide additional oversight. The selected agencies used existing processes to award and administer Recovery Act contracts, but they also implemented a number of additional measures intended to provide enhanced oversight.
This added oversight was in response to the specific requirements of the Recovery Act and implementing guidance from OMB for greater transparency, speedy execution of projects, maximizing competition in contracting, and other priorities. According to agency officials, additional oversight measures were put in place at the agencywide level, as well as within the agency components that we reviewed. All five of the selected agencies created working groups, committees, or other internal entities with the mission of coordinating each agency’s Recovery Act work. Most of these groups deal with a wide range of Recovery Act-related implementation issues and include oversight of contracting as one element of their work. Generally, officials said that they meet on a regular basis—such as monthly or weekly—and provide a venue for officials from across the agencies to provide management visibility into Recovery Act programs, discuss problems that may have arisen, and coordinate approaches by issuing formal or informal guidance. For example, DOD created the Recovery Act Working Group to coordinate implementation across the department. At weekly meetings, representatives from the Office of the Secretary of Defense and the military services provide updates on the status of Recovery Act obligations, projects in progress, relevant IG findings, and other issues. Further, similar Recovery Act coordinating groups are in place within each of the military services. In addition to their primary Recovery Act coordination groups, some agencies also created additional subgroups to coordinate specific aspects of implementation and oversight, such as contracting. For example, HHS established an Office of Recovery Act Coordination to work across the entire agency.
As part of that function, HHS established a Recovery Act Coordinators group to hold weekly meetings of key personnel from the various agency operating divisions, allowing centralized collection and distribution of management information. Most agencies reported that they also identified a single individual to take managerial responsibility for implementation and oversight of Recovery Act programs. For example, NASA created the Recovery Act Implementation Executive position responsible for coordinating activities throughout the agency related to the administration of Recovery Act programs. Likewise, at DOD, the Principal Deputy Under Secretary of Defense in the Comptroller’s office leads the Recovery Act Working Group and is responsible for ensuring that the military services are properly administering their Recovery Act-funded programs. Although the selected agencies reported that they awarded Recovery Act contracts through their standard contracting processes, one agency implemented additional pre-award reviews of contract actions. According to NIH officials, NIH implemented an increased review of contracts awarded noncompetitively, which allowed greater visibility into Recovery Act contracts. Typically, NIH management reviews any noncompetitive contract award over $550,000, but NIH procedures for the Recovery Act require management review of all proposed noncompetitive contracts prior to award. Across the five federal agencies, some provided additional review in other ways, such as reviews of selected projects prior to the contract award process. See appendix II for additional details on each agency. Agencies increased the amount of internal reporting of Recovery Act activities, including contracting. In combination with the coordination groups discussed above, this internal reporting was intended to create greater visibility for Recovery Act programs. 
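The NIH review rule described above lends itself to a simple decision check. The following Python sketch is illustrative only: the function name and parameters are our own hypothetical choices, not part of any NIH system; the $550,000 threshold is the figure reported here, and the rule applies only to noncompetitive awards.

```python
# Illustrative sketch (not NIH's actual system) of the review rule described
# in the report: noncompetitive awards over $550,000 normally require
# management review, but under Recovery Act procedures all proposed
# noncompetitive contracts require review prior to award, regardless of amount.

NORMAL_REVIEW_THRESHOLD = 550_000  # dollars, per the rule cited in the report

def requires_management_review(amount, noncompetitive, recovery_act):
    """Return True if a proposed award needs pre-award management review."""
    if not noncompetitive:
        return False  # this rule covers only noncompetitive awards
    if recovery_act:
        return True   # Recovery Act: review every noncompetitive award
    return amount > NORMAL_REVIEW_THRESHOLD

print(requires_management_review(600_000, True, False))  # True
print(requires_management_review(100_000, True, False))  # False
print(requires_management_review(100_000, True, True))   # True
```

The key design point is that the Recovery Act flag overrides the dollar threshold, which is how the report characterizes NIH's expanded pre-award review.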
Agencies increased the amount of data provided directly to agency leadership on contract awards, as well as the frequency at which these data are updated. For example, DOE expanded an existing data system to provide more frequent reporting and performance information to a larger number of users as part of its approach to Recovery Act oversight. The system includes regularly updated financial, earned value management, performance, risk, and job creation data on DOE projects, which are available to agency officials directly and through daily summary reports. Within DOD, USACE established a weekly report to agency leadership on Recovery Act contracting activity, showing obligations, project status, and other information. The additional oversight processes and increased volume of funding under the Recovery Act have put added demands on agency contracting staff, which agency officials said were having some impact on their ability to carry out their missions. The Recovery Accountability and Transparency Board coordinated a survey administered by IGs of contracting and grant officials at 29 agencies regarding the adequacy of contracting and grant staffing levels. Some survey respondents said that staffing was inadequate, while about half said that staffing was adequate to meet Recovery Act needs but that Recovery Act demands affected non-Recovery Act work. Contracting officials we met with at several agencies during our site reviews also reported an impact on their staff. Officials said that staff had put in extra hours to meet Recovery Act demands, and in one case said that attention to Recovery Act contracts had led to delays on non-Recovery Act contract awards. The Recovery Act provided supplemental funding to IGs to support their oversight of their agencies’ spending under the act. Table 1 shows the funding provided to the IGs for the five selected agencies. 
IGs for the selected agencies reported that they used assessments of the relative risks, specific to their agencies and programs, of different Recovery Act activities to target their oversight efforts. At three of the five IG offices, these assessments did not result in a focus on contracting. The IGs for all five agencies reported that they used a risk-based approach to structuring their Recovery Act oversight work, but each considered different factors in assessing risk. They all said that the amount of Recovery Act funding received by their agency was a main factor, leading them to focus on program areas or projects receiving the greatest funding. Most Recovery Act spending was through grants, not contracts. Other risk factors used by some of the IGs included problems identified in previous audit work, the level of experience of grant recipients, and contract characteristics such as the level of competition and whether the contract was new or existing. At three of the IG offices, the assessment results showed that Recovery Act contracting was an area of lower risk relative to Recovery Act spending through grants and loans. These offices devoted only a small portion of their Recovery Act audit work to contracting. For example, HHS IG officials said that they focused on the agency’s grant programs, in large part because the amount of Recovery Act funding to be spent by HHS through grants was much greater than the amount to be spent through contracts. In addition, the HHS IG’s prior findings showed grants to be a higher-risk area. The officials said that they also took into account the risks posed by increased funding under the Recovery Act. For example, an HHS IG official said that they anticipated that some grant recipients would have little prior experience with federal funds. As a result of this risk assessment, the HHS IG conducted only limited work on contracts. 
This work involved two reviews that looked at administrative approvals and funding for a selection of contracts at NIH, and concluded that no further reviews were needed. The IGs review their Recovery Act audit plans periodically, generally on a semiannual basis, and revise them as warranted. Contracting under the 8(a) program was not a focus for four of the five IGs, who did not use the 8(a) status of a business as a factor in their selection of contracts for review, and did not review 8(a) compliance issues, such as 8(a) eligibility or limits on the amount of work that can be subcontracted. The DOE IG and HHS IG did not review issues related to 8(a) contracts as a result of their risk assessments, because they did not identify contracting as a high-risk area. DOD IG and NASA IG officials said that they did not focus on issues related to 8(a) contracts beyond the 8(a) contracts they encountered in performing their programmatic reviews, and did not review 8(a) business compliance and eligibility. The eligibility determination is an issue that is within the sole purview of SBA. The SBA IG did review some 8(a) contracts and looked into the reasons specific businesses were chosen. In one of the SBA IG reviews, the resulting report did address the eligibility of two 8(a) businesses and determined that one of the two businesses was not eligible for the contract award under the 8(a) program rules. We recently reviewed the process SBA uses to ensure that 8(a) businesses remain eligible to continue participating in the program, and found inconsistencies and weaknesses in the required annual review procedures. For example, we estimated that SBA staff at five district offices failed to complete the required review for 55 percent of 8(a) businesses. In a separate review, we recently found that $325 million in set-aside and sole-source contracts were awarded to businesses that were not eligible to participate in the program. 
We also have identified issues with respect to the use of 8(a) businesses that qualify as Alaska Native Corporations. Specifically, we have found that agencies have not always complied with requirements to notify SBA when 8(a) contracts with Alaska Native Corporations are modified, or to ensure that the businesses comply with limits on subcontracting. In contrast to the other three IGs, the DOD IG and NASA IG included reviews of individual contracts as a central part of their oversight. According to DOD IG officials, they chose their approach as a result of their risk analysis: the majority of the department’s Recovery Act spending is through contracts for building construction and renovation. DOD IG officials analyzed data on the services’ planned projects and decided which ones to review based primarily on the size, location, and type of project. The DOD IG, with the assistance of the military services’ audit agencies—the Army Audit Agency, the Air Force Audit Agency, and the Naval Audit Service—conducted coordinated reviews of the projects identified through the initial risk analysis. As part of those reviews, auditors gathered additional information on contract actions for the selected projects, including whether they were issued as orders or modifications under existing contracts, whether the contracts were competitively awarded, and whether a surveillance plan was in place. In addition, the DOD IG and military services’ audit agencies collected information on whether contracts for the projects they reviewed were awarded to 8(a) businesses, but the officials said that they did not assess business eligibility because this falls under the jurisdiction of the office within SBA that administers the 8(a) program. However, DOD IG officials told us that if they suspect that a business is not eligible for the 8(a) program, they refer the matter to SBA for review. 
The only audit work that directly focused on 8(a) businesses, other than the work of the SBA IG noted above, is a review currently being conducted by the Air Force Audit Agency, which is reviewing the eligibility of 8(a) contractors at 10 Air Force installations. As of June 2010, the Air Force Audit Agency had not yet issued its report. As of June 2010, 141 reports had been posted on www.Recovery.gov by the IGs for the five agencies we reviewed. In 43 of the reports, the IGs touched on contracting issues. Of these, 27 were reviews of projects at individual DOD facilities issued by the DOD IG or by the military services’ audit agencies. Most of the IG reports that dealt with contracting did not identify systematic shortcomings in agency processes or Recovery Act contracts. Rather, contracting-related findings ranged from clauses omitted from individual contracts to observations on the completeness of contracting data reported by the agencies. For instance, the Air Force Audit Agency reported in its audit of Elmendorf Air Force Base that while the base’s Recovery Act contracts met several requirements, such as expediting the award process and fostering competition, they had not fully met transparency requirements because the contracting office did not provide sufficient information on the work to be completed for one project on www.fedbizopps.gov. According to Office of the Secretary of Defense and Air Force officials, Elmendorf Air Force Base subsequently reposted the project on www.fedbizopps.gov to more accurately reflect the work accomplished. One IG report, however, noted significant shortcomings in agency contracting workforce capacity. The SBA IG determined that staffing levels in the agency’s contracting office were insufficient. The SBA IG found that because of vacant positions, contracting office staff declined from 13 to 7 personnel from June 2009 to February 2010, at a time when the office’s workload increased as a result of Recovery Act implementation. 
The report concluded that the current staffing of the contracting office was insufficient to award, administer, and oversee Recovery Act and other contracts, and that as a result, the risk of fraud, waste, and abuse had increased. In our discussions with SBA on the report’s findings, a senior procurement official stated that the agency has experienced further attrition in its acquisition workforce since this report was released. To address this, the agency awarded a contract to provide acquisition services for four contracting positions and plans to contract for services for six more. For further information on how the IGs at each of the selected agencies are conducting Recovery Act oversight, see appendix II. At the state level, we were not able to determine the full extent of the use of noncompetitive contracting. The states we visited collect some aggregate data on contracts awarded by state agencies, but did not maintain data on contracting at the local level where a portion of the contracting activity occurs. These states rely on their pre-Recovery Act contracting policies and procedures, which generally require competition. With respect to oversight, each state has supplemented its state-level guidance with some additional Recovery Act-specific policies and procedures. However, the states do not routinely provide state-level oversight of contracts awarded at the local level, where a portion of the Recovery Act contracting occurs. Representatives of the five state audit organizations said they could address Recovery Act contracting issues through the internal control work performed during the state’s annual Single Audit or during other reviews of programs that involve Recovery Act funds, if contracting is identified as an area of risk. State-level information on the type and amount of data routinely collected on noncompetitive Recovery Act contracts varied in the five states we visited—California, Colorado, Florida, New York, and Texas. 
Officials in some states said they are collecting or could collect data on noncompetitive contracts awarded by the state agencies. Some of the states we visited currently have some level of statewide information on noncompetitive contracts awarded by their state agencies, but with limitations. Specifically, officials in the states we visited told us the following:

California’s statewide contract database does not include contracts awarded by all of its state agencies.

Colorado’s statewide contract database does not identify which contracts are funded under the Recovery Act, but noncompetitive Recovery Act contracts are manually reported to the state level.

New York’s statewide contract database includes contracts awarded by state agencies, but does not include data on contracts awarded by state authorities, such as the New York State Energy Research and Development Authority.

Florida has a statewide contract database, but it is voluntary and not routinely used by all state agencies.

Texas’ statewide contract database does not identify which contracts are funded under the Recovery Act.

Officials in California, Colorado, and Florida said that some of their state agencies have awarded noncompetitive Recovery Act contracts, while officials in New York said none have been awarded by their state agencies and officials in Texas said they were not aware of any having been awarded. At the state agency level, we discussed the weatherization and education programs with the respective agencies responsible for managing these programs. In all five states, officials from these agencies said that they have some data on Recovery Act contracts awarded by their agencies. Moreover, state officials in all five states explained that they are not required to provide direct oversight of contracts awarded below the state agency level. 
As a result, they do not collect data on contracts awarded at the local levels by local governments or agencies where a portion of the Recovery Act contracting occurs. The limitations on available contract data, therefore, precluded us from performing an analysis of noncompetitive Recovery Act contracts awarded in the selected states. According to procurement officials in the selected states, the use of competition is generally required when awarding contracts, although exemptions are permitted. Each of the selected states permits exemptions to competition when contracts are awarded to another government entity, and most also permit exemptions when responding to emergencies and when only one provider is available. In the selected states in which state-level officials were aware of the award of noncompetitive Recovery Act contracts, officials said those awards were made between government agencies or to sole-source providers. For example, an agency in one state contracted with a university to provide training, and an agency in another state contracted with businesses that were the sole providers of proprietary scientific equipment. Each of the five states provides oversight of the award of Recovery Act contracts to varying degrees. According to officials, each state uses a combination of policies and procedures that existed prior to the Recovery Act and some additional measures to oversee these awards. Each state supplemented its existing contracting procedures with new guidance and had state agencies that realigned or hired staff to implement Recovery Act requirements. State officials explained that under existing state procedures, agencies are required to prepare justification documentation and obtain approval before they award noncompetitive contracts. 
In addition, state officials told us that generally state agencies are responsible for oversight of contracts their agencies award, while local entities have oversight responsibilities for contracts awarded at the local level. For example, Colorado officials approve local agencies’ procurement processes, but the local agencies acquire weatherization materials on their own using a competitive bid process. Most Recovery Act funds to local governments flow through existing federal grant programs, while some of the funds are provided directly to local governments by federal agencies and others are passed from the federal agencies through state governments to local governments. Therefore, state officials have limited insight into contracts awarded at the local level. In California, for example, state education officials said the size of the state and its more than 1,600 local education entities made it impractical to track local contracts. Nonetheless, officials in the selected states can perform postaward reviews related to contract competition on an as-needed basis. Officials in some of the states we visited said that they did not receive additional resources to provide oversight of Recovery Act funds; to provide added oversight, they sometimes shifted staff from non-Recovery Act to Recovery Act work. Representatives of the five states’ audit organizations said that their organizations could provide additional oversight of the states’ use of Recovery Act contracting funds through the internal control work performed as part of the states’ Single Audits, and some explained that this could also be done through separate programmatic reviews if contracting is identified as an area of risk. Although contract competition is not the singular focus of the Single Audit, it nevertheless may be included as part of the internal control testing for a given program. 
For example, funding for weatherization programs, which increased from the pre-Recovery Act level in the selected states, falls under the Single Audit requirements. According to Florida state officials, their weatherization program funding increased from about $1.3 million before the Recovery Act to an average of $58.7 million per year over a period of 3 years. With respect to noncompetitive contracts, the audit organizations for some of the states we visited had not identified noncompetitive contracts as a risk area and did not plan any audits specifically targeted at this contracting method. Audit organization representatives in each of the five states we visited said that they were in the process of conducting reviews of some Recovery Act programs but that the focus of these audits is not on noncompetitive contracts; however, they also noted that these audits could address procurement and contracting issues should such issues surface during the course of their work. At the state level—unlike the federal level—Recovery Act funds were not specifically set aside for state audit organizations to provide oversight of the use of Recovery Act funds. To focus their resources, some state audit organizations have performed risk assessments of state agencies and are planning additional programmatic reviews. These state audit organizations used risk assessments to identify programs for potential review and, in some states, to maximize the use of limited auditing resources. State audit officials told us that the factors considered in their risk assessments included dollar values of programs, previous audit findings, internal control weaknesses identified as a result of the Single Audits, whether the program was new, and whether a program received large increases in funding. 
As we previously reported, recent budgeting challenges for state governments have reduced staffing levels, and audit organizations have not been spared from budget reductions that could limit their capacity to perform audits involving Recovery Act funds. At the federal level, available data were sufficient for us to determine the extent to which agencies used competition for Recovery Act contracting, the reasons selected agencies chose not to use competition, and their approaches to contract oversight. In general, congressional and administration direction to quickly obligate Recovery Act funds led agencies across the government to rely heavily on existing contract vehicles to get work under contract. Most of these existing contracts, as well as most new contract actions, were competitive. Federal agencies added oversight procedures, internal reporting, and coordination in response to Recovery Act requirements. Federal agency IGs focused their initial oversight efforts on areas they determined to be higher risk and did not target spending under contracts, including noncompetitive contracts. While this approach may have been justified initially given competing priorities and the relatively small percentage of obligations spent on noncompetitive contract actions, the result is relatively little audit coverage of Recovery Act contract actions under SBA’s 8(a) program. This is significant for two reasons. First, the 8(a) program accounts for the overwhelming majority of noncompetitive contract obligations under the Recovery Act. Second, our prior work, some of which is quite recent and was not available to the IGs when they prepared their audit plans, has shown that safeguards designed to ensure that the program operates as intended—requiring checks on participant eligibility and limits on subcontracting—are not always implemented effectively. 
While we recognize that the Recovery Act guidance encourages contracting with small businesses, there is an opportunity for the IGs to reassess whether they need to focus additional audit resources on contracting under the 8(a) program, which accounts for nearly 80 percent of the new noncompetitive contract actions under the Recovery Act. At the state level, we were not able to determine the full extent of the use of noncompetitive contracting. The five states we visited collected some aggregate data on contracts awarded by state agencies, but did not maintain data on contracting at the local level where a portion of the contracting activity occurs. As a result, we could not analyze the extent of noncompetitive Recovery Act contracting within these states. With respect to oversight, each state has supplemented its state-level guidance with some additional Recovery Act-specific policies and procedures but does not routinely provide state-level oversight of contracts awarded at the local level. State audit organizations for the selected states are focusing their audit resources on programmatic reviews rather than focusing on the use of noncompetitive Recovery Act contracts, consistent with their assessments of relative risk. As the IGs of the five agencies we reviewed periodically revisit and revise their Recovery Act audit plans, they should assess the need for allocating an appropriate level of audit resources, as determined using their risk-based analyses, to the noncompetitive contracts awarded under SBA’s 8(a) program. We provided a draft of this report to DOD, DOE, HHS, NASA, SBA, and their respective IGs for comment. We received e-mail comments from DOD, HHS, and NASA, as well as the DOE IG and SBA IG, in which the agencies all generally agreed with the report’s findings and recommendation or had no comments. In some cases, the agencies provided technical comments or clarifying information, which we incorporated into the report as appropriate. 
We received written comments from SBA as well as the DOD IG, DOE IG, and NASA IG. The DOD IG provided the department’s official comments and agreed with the draft report and its recommendation. The DOE IG noted that DOE is one of the most contractor-dependent agencies in the government and that the DOE IG routinely considers 8(a) program contracts in its audit work. We consider the DOE IG’s audit approach to be consistent with the intent of our recommendation. The NASA IG agreed with the draft report and its recommendation and noted that it is planning work on a number of Recovery Act contracts involving 8(a) program businesses. In its written comments, SBA noted its concern about our findings and recommendation regarding the 8(a) program. Specifically, SBA was concerned about what it viewed as our draft report’s attempt to link the legitimate use of the 8(a) program with the results of a previous GAO report that found ineligible businesses receiving contracts under the program. SBA was also concerned that our report might be suggesting that use of the 8(a) program was either inappropriate or a risky procurement choice. We did not intend to suggest that there was anything improper with agencies deciding to use the 8(a) program in implementing the Recovery Act. In fact, our report points out that OMB’s Recovery Act guidance specifically lists providing opportunities for small businesses to the maximum extent practicable and supporting disadvantaged businesses as goals for agencies using Recovery Act funds. We mentioned our prior findings regarding 8(a) eligibility only to illustrate that there may be issues that merit consideration by agency IGs as part of their overall approach to audits related to Recovery Act contracts that were not apparent when they developed their Recovery Act audit plans. We also provided a draft of this report to representatives within the states of California, Colorado, Florida, New York, and Texas for comment. 
We received e-mail comments from various officials within the states of California, Colorado, Florida, and New York, including some of the state audit organizations, in which they generally agreed with the report’s findings or had no comments. Some state officials provided technical comments or clarifying information in their e-mails, which we incorporated into the report as appropriate. We received written comments from the states of Florida and Texas. Florida generally agreed with the report’s findings. Texas provided a proposed factual addition and a technical comment, which we incorporated as appropriate. Texas also made an observation that Congress had not provided funds for state oversight of Recovery Act funds. Although the Recovery Act did not provide such funds, as noted in footnote 28 there is guidance from OMB that could permit reimbursement of such state expenses under specified circumstances. The written comments are reprinted in appendixes IV through IX. We are sending copies of this report to interested congressional committees, as well as the Secretaries of the Departments of Defense, Energy, and Health and Human Services; the Administrators of the National Aeronautics and Space Administration and the Small Business Administration; and the Inspectors General of these five agencies. In addition, we are sending the report to officials in the five states covered in our review. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X.

…Acquisition Regulation (FAR), including, for example, sole-source contracts awarded under SBA’s 8(a) program as well as contracts awarded without competition under simplified acquisition procedures. 
Totals exclude 456 Recovery Act contract actions for which the extent of competition was not recorded in FPDS-NG. These actions represent a total of $324,304,140 in Recovery Act obligations.

DOD’s mission is to provide the military forces needed to deter war and to protect the security of our country. The mission of USACE, one of DOD’s construction agents, is to provide vital public engineering services in peace and war to strengthen our nation's security, energize the economy, and reduce risks from disasters. DOD received approximately $7.4 billion in defense-related appropriations under the Recovery Act, with an additional $4.6 billion appropriated to USACE for its Civil Works Program. According to DOD’s Recovery Act plan, about 88 percent of its non-USACE Recovery Act funding is for facilities infrastructure. This includes DOD’s Facilities Sustainment, Restoration, and Modernization program, the Military Construction program, and the Energy Conservation Investment Program. The remaining funds are for the expansion of the Homeowners Assistance Program providing assistance to military and civilian families and the Near Term Energy-Efficient Technologies program. Recovery Act funds for USACE are allocated to various business programs under the Civil Works Program, including emergency management, environment and environmental stewardship, flood risk management, hydropower, navigation, recreation, regulatory, and water storage for water supply. DOD program areas receiving Recovery Act funding are listed in table 2. As of May 2010, DOD (including USACE) obligated more than $7.5 billion of Recovery Act funds on contracts. DOD obligated about two-thirds of its Recovery Act funds in the last two quarters of fiscal year 2009, from April through September 2009. Figure 3 shows DOD obligations of Recovery Act funds through contracts by fiscal quarter. Most of the funds that DOD obligated under Recovery Act contract actions were on existing contracts, as shown in figure 4. 
Of those funds obligated on new contracts, most went to competitively awarded contracts. Approximately 17 percent of obligations on new contracts went to noncompetitively awarded contracts, most of which were awarded to 8(a) program small businesses. We selected 67 noncompetitive contracts, task orders, or modifications for review at the USACE Sacramento District. Most of these actions were placed under existing indefinite delivery/indefinite quantity (IDIQ) contracts that had been awarded to 8(a) program businesses. Sacramento District contracting officials told us that they typically award IDIQ contracts to 8(a) program businesses for smaller-dollar projects as part of their regular business processes. These contract vehicles can then be used to quickly place orders for individual projects within the scope of the contract until the total value of the contract approaches the $3.5 million threshold for noncompetitive 8(a) program awards. About half the dollars obligated under the Recovery Act by the Sacramento District—over $53 million—were used to accelerate funding of an existing project to relocate train tracks in Napa, California, as part of a flood control project. This action is considered noncompetitive because the original contract was awarded sole-source to an Alaska Native Corporation in 2008, prior to the enactment of the Recovery Act; the contract was modified in 2009 to add Recovery Act funds. According to USACE officials, the Recovery Act funding accelerated the completion of the flood control project, which also decreased the total cost of the project. Some of the Recovery Act orders at Sacramento District were administered by USACE on behalf of other DOD components, such as the Army and Air Force. For instance, USACE placed an order on an existing IDIQ contract with an 8(a) program business for work on ventilation controls in buildings at Beale Air Force Base in Roseville, California. 
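The ordering pattern reported at the Sacramento District, placing task orders against an 8(a) IDIQ contract until the contract's cumulative value nears the $3.5 million cap on noncompetitive 8(a) awards, reduces to a simple running-total check. The sketch below is illustrative only; the function and order amounts are hypothetical, and the $3.5 million figure is the threshold cited in this report.

```python
# Illustrative sketch of the cumulative-value check implied by the report:
# a new task order on an 8(a) IDIQ vehicle is acceptable only if it keeps the
# contract's total value at or under the noncompetitive 8(a) award threshold.

SOLE_SOURCE_8A_CAP = 3_500_000  # threshold for noncompetitive 8(a) awards, in dollars

def can_place_order(existing_orders, proposed_order):
    """Return True if the proposed order keeps cumulative IDIQ value within the cap."""
    return sum(existing_orders) + proposed_order <= SOLE_SOURCE_8A_CAP

orders = [900_000, 1_200_000]              # hypothetical prior task orders
print(can_place_order(orders, 1_000_000))  # 2.1M + 1.0M = 3.1M -> True
print(can_place_order(orders, 1_600_000))  # 2.1M + 1.6M = 3.7M -> False
```

In practice contracting officers apply additional scope and regulatory tests before placing an order; the arithmetic above captures only the dollar-threshold aspect described in the report.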
Table 3 provides additional details on some noncompetitive contract actions we reviewed at USACE Sacramento District. These examples illustrate the variety of services and supplies being acquired, the amount of Recovery Act funding used, and the reason a contract action was not competed. Using FPDS-NG data as of February 19, 2010, we identified 16 DOD contracts that required documented justification and approval for using other than full and open competition. Our review of these justification documents found that they included information to support the stated reason for a noncompetitive award. The most common reason, cited in 15 of the contract files, was that only one source was able to provide the product or service. Within this group, about half were contracts for utilities such as water service, while most of the others were for proprietary equipment or technology that could only be provided by one business. For instance, one contract was for the purchase of replacement parts for a hydraulic system at a USACE dam. The justification stated that the contract was awarded without competition because the original manufacturer of the equipment is the only available source of replacement parts.

Agency Contract Oversight

DOD efforts to provide oversight and transparency for Recovery Act activities include internal coordination, increased reporting to management, and recipient reporting. Coordination: The Office of the Secretary of Defense (OSD) assigned the Principal Deputy Under Secretary of Defense within its Comptroller’s office responsibility for Recovery Act oversight and coordination at the department level. OSD also established the Recovery Act Defense Department Working Group, which holds a weekly meeting that includes representatives from each of the services; the IG’s office; the small business coordinator; the Acquisition, Technology and Logistics office; and other entities within DOD. 
According to officials, the working group’s discussions cover a variety of Recovery Act issues at a high level, some of which are specifically contracting-related, such as contract obligations and updates on specific programs. Reporting: At the OSD level, information on Recovery Act activities, including contracting, is gathered from the individual services and FPDS-NG and compiled in the Business Enterprise Information System, which enables management to oversee DOD’s Recovery Act programs across all three services. For instance, the system includes data on contract obligations and estimated completion dates for DOD Recovery Act projects, and is updated continually. Individual DOD components have also implemented additional management reporting—for instance, USACE generates a weekly report for its leadership on the progress of Recovery Act projects. Additional review: DOD did not create any additional levels of pre-award approval at the department level; contracting is administered by the individual services. USACE did not implement any additional levels of pre-award approval for Recovery Act contracts. Issues: OSD officials said that no schedule or cost overrun issues have come to their attention. The only contract-related problem that they have had to address at the department level has been with recipient reporting and ensuring that recipient reports are filed by the contractors and are accurate. Risk assessment: When designing its Recovery Act audit approach, DOD IG used data on individual DOD projects to assess risk and focus its efforts. The risk assessment ranked individual projects, incorporating the dollar value of the contracts, project type, location, and contract characteristics, such as the level of competition, as risk factors. DOD IG initially selected the 83 highest-risk projects based on these criteria. 
Once on-site reviews began, the information gathered was used to further refine the risk assessment criteria and select some additional projects. Audit Approach: The DOD IG established a three-phase review of Recovery Act-related activities. Phase 1, review of DOD and program-specific Recovery Act implementation plans, has been completed. These reviews found that the DOD and program plans met Office of Management and Budget (OMB) standards, although the DOD IG called for additional detail regarding how the agency arrived at its projections of the proportion of contracts that would be awarded competitively. Phase 2 is a review of the implementation of the Recovery Act programs, focusing on the projects based on the results of the risk assessment. DOD IG identified sites to visit for the Facilities Sustainment, Restoration, and Modernization and Military Construction programs. The DOD IG’s reviews within each military service are being conducted in cooperation with the respective military service audit agencies. As part of this work, DOD IG and audit agency staff review the extent of competition and the related documentation for selected contracts. The Air Force Audit Agency is also conducting some additional Recovery Act reviews beyond those it is conducting on behalf of the DOD IG. This work is ongoing. In Phase 3, which is not yet underway, the DOD IG will provide oversight of the construction of the projects, ensure that all required reporting is taking place, and review the results of the projects. Findings: As of June 9, 2010, the DOD IG and military service audit agencies had posted reports on about 27 individual site reviews on www.Recovery.gov. These reports have found management of Recovery Act contracting to be generally good, although they suggest areas for improvement at some specific installations, such as ensuring that all Recovery Act-related clauses are included in every contract, or developing a plan to manage recipient reporting. 
DOE works to advance the national, economic, and energy security of the United States; to promote scientific and technological innovation in support of that mission; and to ensure the environmental cleanup of the national nuclear weapons complex. DOE received approximately $36.7 billion in funding under the Recovery Act. Of this, $32.7 billion was for the award of grants and contracts. However, many programs involved comparatively little contracting by DOE—for instance, the Weatherization Assistance Program ($5 billion) provided grants to states. By contrast, funding for cleanup of nuclear sites ($6 billion) is spent primarily through contracts. DOE program areas receiving Recovery Act funding are listed in table 4. As of May 2010, DOE had obligated more than $7.1 billion of Recovery Act funds through contracts. Most of the DOE Recovery Act contracting funds to date were obligated within the last two quarters of fiscal year 2009, from April through September 2009. Figure 5 shows DOE obligations of Recovery Act funds through contracts by fiscal quarter. Nearly all—almost 100 percent—of the funds that DOE obligated under Recovery Act contract actions were on existing contracts, as shown in figure 6. About 97 percent of all Recovery Act funds at DOE were on contract actions coded in FPDS-NG as awarded competitively. However, among the small amount of funds obligated through new contracts, 92 percent were obligated on noncompetitively awarded contracts. Of the 16 contract actions we reviewed at DOE’s Environmental Management Consolidated Business Center, all were orders or modifications on existing noncompetitive contracts. Several added funding to existing remediation projects for sites with radioactive contamination. For example, $1.9 million in Recovery Act funds were obligated on a contract for environmental remediation for the Uranium Mill Tailings Remediation Action in Moab, Utah. 
Other contracts were for administrative support and involved smaller amounts of Recovery Act funds. For instance, DOE issued an order on an existing contract for monitoring and reporting support. Table 5 provides additional details on some noncompetitive contract actions we reviewed at the Environmental Management Consolidated Business Center. These examples illustrate the variety of services and supplies being acquired, the amount of Recovery Act funding used, and the reason a contract action was not competed. In a review of FPDS data as of February 19, 2010, we did not identify any new noncompetitive DOE contracts requiring a documented justification and approval for being awarded noncompetitively. DOE efforts to provide oversight and transparency for Recovery Act activities include internal coordination, increased reporting to management, and recipient reporting. Coordination: DOE created the Senior Advisor position in the Office of the Secretary of Energy charged with overseeing Recovery Act implementation. This official leads the Office of the Recovery Act, which holds regular meetings with key officials from each of the agency’s program and functional divisions. These meetings were held daily in the first months of Recovery Act implementation and are now held weekly. According to agency officials, a primary goal of these coordination meetings is to create strong links between the work of program offices and that of the functional offices, such as contracting, that support the programs. Topics of discussion at these meetings include the status of ongoing projects, areas of Recovery Act implementation identified as lagging, and other issues raised through review of agency data or by meeting participants. Officials said that Recovery Act coordination teams have also been established within individual DOE functional offices. Reporting: DOE increased the amount of internal reporting as part of its Recovery Act oversight. 
An internal system, iPortal, reports detailed financial, earned value management, performance, risk, and job creation data on DOE projects. This system had already been in place, but was expanded for the Recovery Act to support more frequent reporting, performance dashboard displays, and an increased number of users from across the agency. The iPortal system generates automated daily and weekly reports to agency officials on key aspects of Recovery Act implementation; officials also use it to browse data on individual programs and projects. In addition, officials said that each program participates in a quarterly review of Recovery Act performance. Additional review: According to DOE officials, all projects receiving Recovery Act funding had to be approved by the program office, the Office of the Recovery Act, the Under Secretary, and the Secretary. The projects were also reviewed and approved by OMB before contract performance could begin. After these projects completed this review process, DOE did not impose any additional levels of pre-award contract review beyond its normal processes, according to officials. Issues: Agency officials said that they had not encountered any notable problems in implementing Recovery Act contracts. Risk assessment: According to DOE IG officials, the DOE IG’s Office of Audit Services conducts an annual risk assessment, and in response to the Recovery Act, the office incorporated its programs into the existing process. Officials said that this assessment includes collective judgment of risks and vulnerabilities from the DOE IG’s previous audit work, and combines these risks with other factors such as the level of funding. DOE IG officials said that they were familiar with existing remediation contracts through their prior work, and determined that adding additional funding to them was not high risk. Audit approach: DOE IG created a tiered approach to oversight of Recovery Act funds. 
Because the areas identified in the risk assessment do not emphasize contracting, only portions of the audit approach include contracting.

Tier 1: Review the department’s internal control structure and management of the most significant programs (those exceeding $500 million) under the Recovery Act.

Tier 2: Examine the efficiency and effectiveness of the department’s distribution of funds to primary recipients such as state and local governments.

Tier 3: Examine the use of funds by contract and grant recipients through transaction testing.

Because grants represent a larger share of DOE Recovery Act funds, DOE IG officials said that grant programs have been the focus of the majority of their reviews. Findings: DOE IG has released seven Recovery Act-related reports that address contracting issues. Most of these are not direct reviews of the agency’s Recovery Act spending, but rather address previously identified management issues that the DOE IG determined could have an impact on the agency’s Recovery Act programs. For example, the DOE IG issued a report on the agency’s management of contract fines, penalties, and legal costs, and noted the potential impact on Recovery Act implementation.

HHS’s mission is to enhance the health and well-being of Americans by providing for effective health and human services and by fostering strong, sustained advances in the sciences underlying medicine, public health, and social services. The Recovery Act provided over $145 billion to HHS, of which the agency has allocated over $90 billion (63 percent) to improving and preserving health care. Over $25 billion, or 18 percent, will be used for health information technology. Spending on children and family services and on scientific research and facilities makes up most of the remaining funds. As of June 30, 2010, HHS had obligated over $87 billion of its Recovery Act funds, including nearly $1.3 billion in contracts and orders. 
HHS program areas receiving Recovery Act funding are listed in table 6. Recovery Act contract obligations peaked in the fourth quarter of fiscal year 2009 at $752 million. These obligations have been below $300 million in each subsequent quarter. Figure 7 shows HHS obligations of Recovery Act funds through contracts by fiscal quarter. Most of the funds that HHS obligated under Recovery Act contract actions, about 83 percent, were obligated on existing contracts as shown in figure 8. Of the funds used for new contract actions, 76 percent were obligated on contracts that were competed. Of the obligations on noncompetitive new contract actions, 58 percent were on actions awarded noncompetitively because of the urgency of the agency’s need, 22 percent were on actions for which only one source was available, 9 percent were on actions awarded noncompetitively under SBA’s 8(a) program, and 2 percent were on actions noncompetitively awarded under simplified acquisition procedures. We selected NIH for our contract file review because it had the largest number and dollar value of noncompetitive Recovery Act actions. The most common reason for not competing the award of a contract was that there was only one source available. This occurred on contracts for new medical and laboratory equipment for which only one business could meet the requirements of the contract. Only one source available was also the reason cited on contracts for equipment and software upgrades. In these cases, the program and contracting offices decided that it was more practical to upgrade the existing equipment than to purchase new equipment. These upgrades were only available through the manufacturer of the equipment and were therefore not competed. The contract files included market research that did not identify alternative sources or comparable price quotes for similar items. Table 7 provides additional details on some noncompetitive contract actions we reviewed at NIH. 
These examples illustrate the variety of services and supplies being acquired, the amount of Recovery Act funding used, and the reason a contract action was not competed. Using FPDS-NG data as of February 19, 2010, we identified four contracts at HHS—three awarded at the Centers for Disease Control and Prevention and one at NIH—that required a documented justification and approval for using other than full and open competition. In each case, the contractor was selected on a noncompetitive basis because there was only one source available that could fully meet project requirements. For example, on an NIH contract for the upgrade of a system that stores pictures generated by medical imaging devices, it was determined that the incumbent contractor was the only source capable of meeting the contract requirements as it had important institutional knowledge and access to a proprietary system, and no other sources could be found. While one other source offered a competing proposal, it was to replace the system rather than upgrade the existing one, a less cost-efficient and more time-consuming alternative, according to agency officials.

Agency Contract Oversight

HHS efforts to provide oversight and transparency for Recovery Act activities include internal coordination, increased reporting to management, and recipient reporting. Coordination: HHS has established an Office of Recovery Act Coordination (ORAC), which coordinates with relevant business management functions, such as public affairs, grants and contract management, financial management, budget, planning and evaluation, information technology, and the Office of the General Counsel. It also coordinates with the offices that manage appropriated funds and programs authorized under the Recovery Act. 
In addition to acting as the central repository for data, policies, and procedures related to the Recovery Act, ORAC prepares executive-level reports that portray the overall status of Recovery Act implementation based on individual project and activity plans. ORAC also identifies the key tasks, milestones, and activities for each project plan that require coordination with HHS program and business functions. Additional review: NIH has established a process early in the acquisition planning stage for contracts using Recovery Act funds whereby a summary of the requirement, including any justifications for noncompetitive acquisitions, is reviewed and approved by various senior representatives to ensure that the requirement meets the intent of the Recovery Act and that the justification is supported. This document is called a Proposed Recovery Act Contract Action Approval Form. NIH contracting staff use a checklist in each contract to ensure that the files are complete and comply with Recovery Act requirements. NIH also developed detailed guidance that complements and expands guidance issued by OMB. All contract actions at NIH funded in whole or in part by the Recovery Act are subject to this guidance. Included in this guidance are additional oversight mechanisms and measures related to use of noncompetitive acquisitions. The Recovery Act provided the HHS IG with $17 million in funding for oversight and review and an additional $31,250,000 for ensuring the proper expenditure of funds under Medicaid. As of May 2010, the HHS IG has used $4.8 million of these funds. According to the HHS IG, internal risk assessments determined that the areas of greatest risk were the grant awards of the Administration for Children and Families (which is administering grant funds for expanded Head Start programs, among other programs) and the Health Resources and Services Administration, particularly those related to community health center grants. 
Accordingly, HHS IG officials are focusing their oversight efforts on these agencies. By contrast, HHS IG officials determined that contracting activities, such as those we reviewed at NIH, are of comparatively lower risk. Efforts are presently focused on the identified high-risk departments and programs. While the HHS IG plans to review Recovery Act spending at colleges and universities in fiscal year 2011, these reviews will focus on compliance with grant terms. NASA’s mission is to pioneer the future in space exploration, scientific discovery and aeronautics research. NASA received approximately $1 billion in Recovery Act funds, 80 percent of which were used for Science and Exploration programs, 15 percent for Aeronautics programs, and 5 percent for cross-agency support programs which include restoration of NASA-owned facilities damaged by hurricanes and other natural disasters that occurred during calendar year 2008. NASA program areas receiving Recovery Act funding are listed in table 8. Nearly half of NASA’s Recovery Act contracting funds were obligated in the fourth quarter of fiscal year 2009. Figure 9 shows NASA obligations of Recovery Act funds through contracts by fiscal quarter. Most of the funds that NASA obligated under Recovery Act contract actions, about 89 percent, were obligated on existing contracts as shown in figure 10. Of the funds obligated for new actions, over 79 percent were obligated on contracts that were competed. For the noncompetitive new contract obligations, 64 percent were on actions awarded noncompetitively under SBA’s 8(a) program, 33 percent were on actions awarded noncompetitively because there was only one source available, and 3 percent were on actions noncompetitively awarded under simplified acquisition procedures. We reviewed 10 noncompetitive Recovery Act contract actions awarded by the NASA Johnson Space Center (JSC). 
The largest single obligation of Recovery Act funds that we reviewed at NASA was a $15 million modification (change order) to an existing noncompetitive contract in support of Common Docking Adapter development for the International Space Station. Six contract actions in our sample were new contracts to 8(a) program businesses to provide a variety of construction services, repair services, or both at JSC. NASA cited the Recovery Act guidance directing agencies to take advantage of any authorized small business contracting program as its reason for selecting these businesses. Prior to selecting these businesses, the agency performed market research and coordinated with SBA to identify a potential pool of 8(a) program businesses. NASA then held capability briefings with those businesses from which award selections were made. Finally, there were three orders using an existing, originally noncompetitive contract to an 8(a) program business for construction oversight administration services at JSC. Table 9 provides additional details on some noncompetitive contract actions we reviewed at JSC. These examples illustrate the variety of services and supplies being acquired, the amount of Recovery Act funding used, and the reason a contract action was not competed. In a review of FPDS data as of February 19, 2010, we identified one new NASA contract that required a documented justification and approval for use of a noncompetitive award. According to the justification for this contract, only one source was available for specific electronic systems because only one business had developed a spaceflight-appropriate version of the technology.

Agency Contract Oversight

NASA efforts to provide oversight and transparency of Recovery Act-funded efforts include internal coordination, issuing guidance to the procurement community on the implementation of the Recovery Act, a prohibition on commingling of funds, greater reporting to senior management, and recipient reporting. 
There are weekly meetings of NASA oversight and contracting officials to coordinate Recovery Act efforts. In addition, the agency developed an internal online file management system that stores Recovery Act-related contract files and can be accessed by agency officials. NASA issued Procurement Information Circular 09-06E to provide guidance to the procurement community on the implementation of the Recovery Act. The guidance provides instruction on a range of Recovery Act contracting topics including requisition requirements for initiating procurement actions, pre-award considerations and contracting officer responsibilities, posting and reporting requirements for contract actions, inclusion of new FAR clauses, instructions specific to construction contracts, and contractor invoicing procedures, among others. The circular also includes NASA’s process for reviewing contractor reporting under the Recovery Act. According to officials, the NASA IG is reviewing Recovery Act contract actions at selected NASA centers as appropriate; this will include two types of audits, one of the administration and implementation of the contract award and another of the performance of the contractor. Officials reported that the initial administrative audits of Recovery Act contract actions through November 2009 are complete at a number of the centers including Johnson, Goddard, Langley, and Ames. As of June 2010, one contractor performance audit had been conducted. On July 1, 2010, the NASA IG issued a draft report on the combined administrative audits for NASA management’s review and comment. The NASA IG is releasing staggered performance reports and may issue a capping report, as necessary. The NASA IG conducted an initial review of the final NASA Agency-Wide Recovery Act Plan and identified several compliance issues with respect to fulfilling requirements of the OMB guidance. 
According to the NASA IG memorandum, NASA’s Agency-Wide Recovery Act Plan provided insufficient detail about the agency’s broad Recovery Act goals in terms of outputs, outcomes, and expected efficiencies. In addition, the plan did not include a projection of the expected rate of competition nor a rationale for those numbers, as required by OMB guidance. Lastly, the plan did not address the use of fixed-price contracts as a percentage of all dollars spent or describe the steps planned to maximize the use of fixed-price contracts where practicable for Recovery Act-funded contracts. The memorandum was submitted to NASA on December 17, 2009. In NASA management’s response, received January 5, 2010, the Recovery Act Implementation Executive stated the agency concurred with the observations noted in this memorandum. According to NASA management’s response, at the time that the Agency-Wide Recovery Act Plan was due for submission to OMB, Congress had not concurred with NASA’s proposed activities. NASA indicated in its plan that it would provide this additional information with plan updates. SBA’s mission is to maintain and strengthen the nation’s economy by aiding, counseling, assisting, and protecting the interests of small businesses. The Recovery Act provides $730 million to SBA that the agency is using to expand its lending and investment programs so that they can reach more small businesses that need help. While most of SBA’s Recovery Act funds are used for loan programs, contracts are being awarded for equipment and services to support these programs. Specifically, SBA has allocated $20 million for improving technology. Most of the contract dollars are being spent in this area. SBA program areas receiving Recovery Act funding are listed in table 10. Through May 2010, SBA has obligated approximately $11 million of its Recovery Act funds on contracts. SBA’s quarterly obligations have fluctuated. 
According to an SBA procurement official, this was generally because of the award of large, individual contracts. Figure 11 shows SBA obligations of Recovery Act funds through contracts by fiscal quarter. SBA’s use of existing and competed contracts was very different from the other agencies we reviewed. Most of the funds that SBA obligated under Recovery Act contract actions, about 76 percent, were obligated on new contracts, as shown in figure 12 below; only 2 percent of new contracts were awarded competitively. For the noncompetitive new contract obligations, 76 percent were on actions awarded noncompetitively under SBA’s 8(a) program, 3 percent were on actions awarded noncompetitively under simplified acquisition procedures, and 3 percent were on actions awarded noncompetitively because there was only one source available. SBA is primarily using Recovery Act contracts to train, supply, and equip staff to support other Recovery Act-related activities. Most of SBA’s Recovery Act contract dollars were obligated on contracts to 8(a) program businesses. Because agencies are not required to justify in writing the use of noncompetitive contracting procedures for 8(a) program contracts, these contract files were not required to contain a justification document for the noncompetitive award. However, the files contained documentation that described the use of the 8(a) program and included competitors’ quotes to establish price reasonableness. Table 11 provides additional details on some noncompetitive contract actions we reviewed at the SBA. These examples illustrate the variety of services and supplies being acquired, the amount of Recovery Act funding used, and the reason a contract action was not competed. In a review of FPDS data as of February 19, 2010, we did not identify any new, noncompetitive SBA contracts requiring a documented justification and approval for being awarded noncompetitively. 
Agency Contract Oversight

SBA efforts to provide oversight and transparency for Recovery Act activities include increased legal review of contract awards and recipient reporting. SBA has experienced a significant decrease in its acquisition workforce and has contracted out for contract specialists. SBA includes a legal review for all Recovery Act contract awards. This review is not required for every non-Recovery Act award. The SBA IG has received $10 million in Recovery Act funds for oversight. The SBA IG’s Recovery Act Oversight Plan highlighted numerous efforts related to SBA’s contract administration practices and oversight of Recovery Act loans and grants. In the contracting area, the SBA IG’s focus was on examining the award and administration of $20 million in information technology contracts, and evaluating the adequacy of SBA’s acquisition workforce, expenditure controls, and reporting of contract actions. In October 2009, the SBA IG added three staff members to its contract audit group to provide additional audit coverage of the procurement function. The SBA IG has issued a memorandum to SBA’s acquisition office regarding its dramatic shortage of acquisition staff, noting that staffing decreased from 13 to 5 staff members in a short period of time, straining the acquisition office’s ability to issue and provide oversight of Recovery Act contracts. The SBA IG issued a report noting that there are numerous discrepancies in the way that actions are being recorded in FPDS-NG. The SBA IG also issued another report that identified problems with acquisition planning and eligibility for 8(a) program businesses associated with two contracts for the Customer Relationship Management suite of applications (see table 11).

GAO was asked to examine noncompetitive contract awards under the American Recovery and Reinvestment Act of 2009 (Recovery Act). 
In response, we conducted a review to determine: the extent to which Recovery Act funding was spent using contracts, and to what extent these contract actions were awarded noncompetitively; the reasons selected federal agencies awarded noncompetitive Recovery Act contracts; the extent of oversight of Recovery Act contract actions at selected federal agencies; and state officials’ level of insight into the use of noncompetitive Recovery Act contracts within selected states. We analyzed Federal Procurement Data System-Next Generation (FPDS-NG) data to determine the extent to which Recovery Act funding was obligated through contract actions across the federal government. We determined that the FPDS-NG data were sufficiently reliable for the purposes of this review by comparing the information for selected agencies with information from other sources, including agency contract data and information in contract files at selected locations. As part of this analysis, we determined the amount of Recovery Act obligations under new and existing contract vehicles, as reported in FPDS-NG. Actions on the same underlying contract were grouped together; orders and modifications to contracts awarded after enactment of the Recovery Act were counted as occurring under new contracts, while orders and modifications to contracts that predated the Recovery Act were counted as existing contracts. For our second and third objectives, we used FPDS-NG data to select five agencies for more extensive review:

Department of Defense (DOD)
Department of Energy (DOE)
Department of Health and Human Services (HHS)
National Aeronautics and Space Administration (NASA)
Small Business Administration (SBA)

These agencies were identified on the basis of the volume, dollar value, and percentage of noncompetitive contract actions on which they obligated Recovery Act funds, according to data drawn from FPDS-NG on February 19, 2010. The size of the agencies was also considered. 
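The grouping rule described above—actions on the same underlying contract are grouped together, and the contract counts as "new" only if it was awarded after enactment of the Recovery Act (February 17, 2009)—can be sketched as follows. This is a hedged illustration: the function name, the dictionary field names, and the use of the earliest recorded action date as the award date are our assumptions, not FPDS-NG's actual schema.

```python
from datetime import date

# Illustrative sketch of the new-vs-existing classification described in the
# text. Field names ("contract_id", "award_date") are assumptions, not actual
# FPDS-NG data element names.
RECOVERY_ACT_ENACTED = date(2009, 2, 17)

def classify_contracts(actions):
    """Group actions by contract ID and label each contract new or existing."""
    by_contract = {}
    for action in actions:
        by_contract.setdefault(action["contract_id"], []).append(action)
    labels = {}
    for contract_id, acts in by_contract.items():
        # Assume the earliest action date on record is the contract award date.
        award_date = min(a["award_date"] for a in acts)
        labels[contract_id] = "new" if award_date > RECOVERY_ACT_ENACTED else "existing"
    return labels
```

Under this rule, a 2009 modification to a contract originally awarded in 2008 still counts as an existing contract, which is how the Napa flood control modification discussed earlier would be classified.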
Within each of the five agencies, we selected one contracting office at which we reviewed contract files for noncompetitive Recovery Act contract actions. As with the agencies, we chose these locations based on the volume, dollar value, and percentage of noncompetitive Recovery Act contract actions. The types of contract awards made at each location were also considered. The five contracting offices selected were the U.S. Army Corps of Engineers (USACE) Sacramento District at DOD, the Office of Environmental Management Consolidated Business Center at DOE, the National Institutes of Health (NIH) at HHS, the Johnson Space Center at NASA, and the Office of Business Operations at SBA. At each contracting office, we reviewed all noncompetitive contract actions awarded or issued using Recovery Act funds, about 150 actions in total. Because GAO and others have previously identified shortcomings in FPDS-NG, we also asked agency officials to verify the accuracy and completeness of our lists of noncompetitive contract actions before our site visits. For each contract file, we reviewed basic information on the contract award, such as the obligation amount, as well as information on the award process, such as the reason the contract was awarded noncompetitively. These reviews were conducted on-site, except for that of NASA’s Johnson Space Center, for which we reviewed electronic versions of the contract files. We also interviewed agency contracting officials at each location regarding issues related to the contract files included in our review as well as contracting under the Recovery Act as a whole. In addition, using FPDS-NG data, we identified all new Recovery Act contracts at the selected agencies that required documented justifications and approvals authorizing the use of a noncompetitive contracting approach, as of February 19, 2010. 
We limited our search to new contracts with an award type of “Definitive Contract” in FPDS-NG, and selected for review all those where the amount obligated exceeded typical thresholds for requiring a documented justification—$3.5 million for contracts with 8(a) program businesses, and $100,000 in most other cases. For each of the contracts, we obtained and reviewed materials from the contract files related to the justification for the noncompetitive award. For each of the five selected federal agencies, we gathered information on Recovery Act contracting oversight from interviews with relevant officials, and reviews of relevant policies, reports, and other documents. We obtained similar information from the agencies’ inspectors general (IG), including their audit plans related to Recovery Act contracting. We also reviewed and analyzed applicable findings the IGs have made regarding management and oversight of Recovery Act contracting. To determine the level of insight that state officials have into the use of noncompetitive Recovery Act contracts, we selected five states—California, Colorado, Florida, New York, and Texas—based on the amount of Recovery Act funds reported as being awarded via contracts on www.Recovery.gov and our goal of providing information on a variety of geographic locations. These states account for more than half of the Recovery Act funds awarded by contract at the state level for the 16 states that we are monitoring as part of our mandatory reporting on Recovery Act issues. For each state, we discussed with the appropriate state officials—including representatives from the governors’ offices, state procurement offices, and audit organizations—the extent to which the states have awarded noncompetitive Recovery Act contracts, the reasons why they did not use competition, and the level of oversight the states provide for these contracts.
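The justification-threshold screen described above for new noncompetitive awards can be sketched as a simple filter. The field names are assumed for illustration and do not correspond to actual FPDS-NG fields.

```python
# Thresholds above which a documented justification was typically required:
# $3.5 million for 8(a) program awards, $100,000 in most other cases.
THRESHOLD_8A = 3_500_000
THRESHOLD_OTHER = 100_000

def needs_justification_review(contract):
    """Return True for new 'Definitive Contract' awards whose obligations
    exceed the typical justification threshold. Field names (award_type,
    is_8a, obligated) are hypothetical."""
    if contract["award_type"] != "Definitive Contract":
        return False
    threshold = THRESHOLD_8A if contract["is_8a"] else THRESHOLD_OTHER
    return contract["obligated"] > threshold
```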
Additionally, we discussed these issues with representatives of the state agencies that manage the education and weatherization programs to obtain further understanding of how state agencies award and oversee contracts. It is important to note that states are not required to follow federal acquisition regulations, including those covering the award of noncompetitive contracts. We conducted this performance audit from February 2010 to July 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix V: Comments from the Department of Energy Inspector General

Mr. James Fuquay
Assistant Director
Government Accountability Office
Via email: (fuquayj@gao.gov)

Subject: Comments on the Draft Government Accountability Office Report: RECOVERY ACT: Contracting Approaches and Oversight at Selected Federal Agencies and States (GAO-10-809)

The Office of Inspector General appreciates the opportunity to comment on the subject report. As we explained during our meetings with GAO officials and as recognized in your draft report, the Office of Inspector General employs a risk-based approach in determining how to best use taxpayer furnished resources. For the Recovery Act, as with all funds appropriated to the U.S. Department of Energy, we consider a number of factors in determining where to apply our scarce audit resources, not the least of which is the form and substance of the contracting vehicle employed. The U.S. Department of Energy is one of the most contractor dependent agencies in the government. As a result, we incorporate the examination of applicable contract instruments into each of our audits.
Regarding our Recovery Act strategy, we considered the subject area during the completion of our risk assessment. However, as your report states, we did not identify contracting as a Recovery Act high risk area. One of the primary reasons was that the Department used a significant portion of its Recovery Act funds to award grants, with virtually all of the remainder dedicated to accelerating approved scopes of work on existing contracts. As GAO notes in the Appendix to its draft, less than one percent of funding was devoted to newly awarded contracts, including those awarded to 8(a) firms. By virtually every reasonable test, such amount is immaterial to the more than $38 billion in Recovery Act funding received by the Department. With respect to use of Recovery Act funding, the GAO is correct in stating that OIG spending has not reached anticipated levels. However, the draft report fails to recognize that this was directly tied to delays in the Department’s program start/scale-up. As we have identified in recently issued and several in-progress reviews, significant spending by the Department on a number of major Recovery Act projects/activities had only recently begun. As of June 30, 2010, however, we had obligated about $6.2 million and expended over $1.7 million of the $15 million we were provided in Recovery Act funds. We anticipate that our spending rate will significantly increase in the near term as the OIG is currently using contract independent public accountants and Federal Recovery Act specific employees to provide support for a significant number of audits at the state and local level. Finally, the report recommends that as we revisit and revise our Recovery Act audit plans, that we should assess the need for allocating an appropriate level of audit resources, as determined using our risk-based analysis, to non-competitive contracts awarded under the 8(a) program. 
We do not disagree with the fundamental premise of the recommendation; however, we do not believe that the facts in this case provide a basis for it. As a matter of practice, we routinely consider contracts of this nature and have completed a number of audits in this area in the past. In fact, our Fiscal Year 2011 plan includes an audit start in this very area. Should you have questions or desire to discuss the contents of our response, please contact me at 202-586-1949.

In addition to the contact named above, William T. Woods, Director; James Fuquay, Assistant Director; Shea Bader; Noah Bleicher; M. Greg Campbell; MacKenzie Cooper; Alexandra Dew; R. Eli DeVan; Kevin Heinz; W. Keith Hudson; Julia Kennon; Jean K. Lee; Teague Lyons; Jean McSween; Norm Rabkin; Morgan Delaney Ramaker; and Russ Reiter all made contributions to this report.
The American Recovery and Reinvestment Act of 2009 (Recovery Act), estimated to cost $862 billion over 10 years, is intended to stimulate the economy and create jobs. The Recovery Act provides funds to federal agencies and states, which in turn may award contracts to private companies and other entities to carry out the purposes of the Recovery Act. Contracts using Recovery Act funds are required to be awarded competitively to the maximum extent practicable. GAO was asked to examine the use and oversight of noncompetitive Recovery Act contracts at the federal and state levels. GAO determined (1) the extent to which federal contracts were awarded noncompetitively; (2) the reasons five selected federal agencies (the Departments of Defense, Energy, and Health and Human Services; the National Aeronautics and Space Administration; and the Small Business Administration (SBA)) awarded noncompetitive contracts; (3) the oversight these agencies and their inspectors general (IG) provide for Recovery Act contracts; and (4) the level of insight five selected states (California, Colorado, Florida, New York, and Texas) have into the use of noncompetitive Recovery Act contracts. More than two-thirds of the $26 billion obligated for Recovery Act federal contract actions through May 2010 was on contracts that were in place before the enactment of the Recovery Act. Most of these contracts had been awarded competitively. For new federal Recovery Act contract actions, 89 percent of the dollars were obligated on competed actions. Most of the Recovery Act dollars obligated noncompetitively on new contract actions went to socially and economically disadvantaged small businesses under SBA's 8(a) program. The goal of using Recovery Act funds quickly on high-priority projects drove the contracting approaches of the five federal agencies, particularly their use of existing contracts.
Officials explained that whether an existing contract had been competed originally did not influence the decision to use a pre-existing contract because the level of competition had been established before Recovery Act funds were available. The selected federal agencies implemented additional review processes, internal reporting, and coordination efforts for the Recovery Act. Some IGs for these agencies focused initial Recovery Act oversight on areas the IGs considered to be higher risk than contracts, such as grant programs. The IG reviews to date have not focused specifically on contracting, including the use of noncompetitive awards to 8(a) program businesses. GAO's recent reviews of the 8(a) program, however, have found that safeguards for ensuring that only eligible firms receive 8(a) contracts may not be working as intended. The five states varied on the type and amount of data routinely collected on noncompetitive Recovery Act contracts. GAO could not determine the full extent to which such contracts are being used. The states generally rely on their pre-Recovery Act contracting policies and procedures, which generally require competition. The states do not routinely provide state-level oversight of contracts awarded at the local level, where a portion of Recovery Act contracting occurs. Officials from the selected states' audit organizations said that if they were to address Recovery Act contracting issues, it could be done through the annual Single Audit or other reviews of programs that involve Recovery Act funds. GAO recommends that the five IGs assess the need to allocate audit resources to noncompetitive 8(a) Recovery Act contracts. The IGs concurred or had no comment.
Passenger screening is a process by which screeners inspect individuals and their property to deter and prevent an act of violence or air piracy, such as the carrying of any unauthorized explosive, incendiary, weapon, or other prohibited item on board an aircraft or into a sterile area. Screeners inspect individuals for prohibited items at designated screening locations. TSA developed standard operating procedures for screening passengers at airport checkpoints. Primary screening is conducted on all airline passengers before they enter the sterile area of an airport and involves passengers walking through a metal detector and carry-on items being subjected to X-ray screening. Passengers must then undergo secondary screening if they alarm the walk-through metal detector, are designated as selectees—that is, passengers selected for additional screening—or have carry-on items identified by the X-ray machine as potentially containing prohibited items. Secondary screening involves additional means for screening passengers, such as by hand-wand; physical pat-down; or, at certain airport locations, an explosives trace portal (ETP), which is used to detect traces of explosives on passengers by using puffs of air to dislodge particles from their bodies and clothing into an analyzer. Selectees’ carry-on items are also physically searched or screened for explosives, such as by using explosives trace detection machines. Federal agencies—particularly NCTC and the FBI—submit to TSC nominations of individuals to be included on the consolidated watchlist. For example, NCTC receives terrorist-related information from executive branch departments and agencies, such as the Department of State, the Central Intelligence Agency, and the FBI, and catalogs this information in its Terrorist Identities Datamart Environment database, commonly known as the TIDE database. This database serves as the U.S. 
government’s central classified database with information on known or suspected international terrorists. According to NCTC, agencies submit watchlist nomination reports to the center, but are not required to specify individual screening systems that they believe should receive the watchlist record, such as the No Fly list of individuals who are to be denied boarding an aircraft. NCTC is to presume that agency nominations are valid unless it has other information in its possession to rebut that position. To decide if a person poses enough of a threat to be placed on the watchlist, agencies are to follow Homeland Security Presidential Directive (HSPD) 6, which states that the watchlist is to contain information about individuals “known or appropriately suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism.” HSPD-24 definitively established the “reasonable suspicion” standard for watchlisting by providing that agencies are to make available to other agencies all biometric information associated with “persons for whom there is an articulable and reasonable basis for suspicion that they pose a threat to national security.” NCTC is to consider information from all available sources and databases to determine if there is a reasonable suspicion of links to terrorism that warrants a nomination, which can involve some level of subjectivity. The guidance on determining reasonable suspicion, which TSC most recently updated in February 2009, contains specific examples of the types of terrorism-related conduct that may make an individual appropriate for inclusion on the watchlist. The White House’s review of the December 25 attempted terrorist attack noted that Mr. Abdulmutallab’s father met with U.S. Embassy officers in Abuja, Nigeria, to discuss his concerns that his son may have come under the influence of unidentified extremists and had planned to travel to Yemen. 
However, according to NCTC, the information in the State Department’s nomination report did not meet the criteria for watchlisting in TSC’s consolidated terrorist screening database per the government’s established and approved nomination standards. NCTC also noted that the State Department cable nominating Mr. Abdulmutallab had no indication that the father was the source of the information. According to the White House review of the December 25 attempted attack, the U.S. government had sufficient information to have uncovered and potentially disrupted the attack—including by placing Mr. Abdulmutallab on the No Fly list—but analysts within the intelligence community failed to connect the dots that could have identified and warned of the specific threat. After receiving the results of the White House’s review of the December 25 attempted attack, the President called for members of the intelligence community to undertake a number of corrective actions—such as clarifying intelligence agency roles, responsibilities, and accountabilities to document, share, and analyze all sources of intelligence and threat threads related to terrorism, and accelerating information technology enhancements that will help with information correlation and analysis. The House Committee on Oversight and Government Reform has asked us, among other things, to assess government efforts to revise the watchlist process, including actions taken related to the December 25 attempted attack. As part of our monitoring of high-risk issues, we also have ongoing work— at the request of the Senate Committee on Homeland Security and Governmental Affairs—that is assessing agency efforts to create the Information Sharing Environment, which is intended to break down barriers to sharing terrorism-related information, especially across federal agencies. 
Our work is designed to help ensure that federal agencies have a road map that defines roles, responsibilities, actions, and time frames for removing barriers, as well as a system to hold agencies accountable to the Congress and the public for making progress on these efforts. Among other things, this road map can be helpful in removing cultural, technological, and other barriers that lead to agencies maintaining information in stove-piped systems so that it is not easily accessible, similar to those problems that the December 25 attempted attack exposed. We expect to issue the results of this work later this year. Following the December 25 attempted terrorist attack, questions were raised as to what could have happened if Mr. Abdulmutallab had been on TSC’s consolidated terrorist screening database. We created several scenarios to help explain how the watchlist process is intended to work and what opportunities agencies could have had to identify him if he was on the watchlist. For example, according to TSC, if a record from the terrorist screening database is sent to the State Department’s system and the individual in that record holds a valid visa, TSC would compare the identifying information in the watchlist record against identifying information in the visa and forward positive matches to the State Department for possible visa revocation. If an individual’s visa is revoked, under existing procedures, this information is to be entered into the database CBP uses to screen airline passengers prior to their boarding, which we describe below. According to CBP, when the individual checks in for a flight, the on-site CBP Immigration Advisory Program officers already would have been apprised of the visa revocation by CBP and they would have checked the person’s travel documents to verify that the individual was a match to the visa revocation record. 
Once the positive match was established, the officers would have recommended that he not be allowed to board the flight. Under another scenario, if an individual is on TSC’s terrorist screening database, existing processes provide CBP with the opportunity to identify the subject of a watchlist record as part of the checks CBP is to conduct to see if airline passengers are eligible to be admitted into the country. Specifically, for international flights departing to or from the United States (but not for domestic flights), CBP is to receive information on passengers obtained, for example, when their travel document is swiped. CBP is to check this passenger information against a number of databases to see if there are any persons who have immigration violations, criminal histories, or any other reason for being denied entry to the country, in accordance with the agency’s mission. According to CBP, when it identifies a U.S.-bound passenger who is on the watchlist, it coordinates with other federal agencies to evaluate the totality of available information to see what action is appropriate. In foreign airports where there is a CBP Immigration Advisory Program presence, the information on a watchlisted subject is forwarded by CBP to program officers onsite. The officers would then intercept the subject prior to boarding the aircraft and confirm that the individual is watchlisted, and when appropriate based on the derogatory information, request that the passenger be denied boarding. In a third scenario, if an individual is on the watchlist and is also placed on the No Fly or Selectee list, when the person checks in for a flight, the individual’s identifying information is to be checked against these lists. Individuals matched to the No Fly list are to be denied boarding. If the individual is matched to the Selectee list, the person is to be subject to further screening, which could include physical screening, such as a pat-down. 
In general, the criteria used to place someone on either of these two lists include the following:

Persons who are deemed to be a threat to civil aviation or national security and should be precluded from boarding an aircraft are put on the No Fly list.

Persons who are deemed to be a threat to civil aviation or national security but do not meet the criteria of the No Fly list are placed on the Selectee list and are to receive additional security screening prior to being permitted to board an aircraft.

The White House Homeland Security Council devised these more stringent sets of criteria for the No Fly and Selectee lists in part because these lists are not intended as investigative or information-gathering tools or tracking mechanisms, and TSA is a screening but not an intelligence agency. Rather, the lists are intended to help ensure the safe transport of passengers and facilitate the flow of commerce. However, the White House’s review of the December 25 attempted terrorist attack raised questions about the effectiveness of the criteria, and the President tasked the FBI and TSC with developing recommendations for any needed changes to the nominations guidance and criteria. Weighing and responding to the potential impacts that changes to the nominations guidance and criteria could have on the traveling public and the airlines will be important considerations in developing such recommendations. In September 2006, we reported that tens of thousands of individuals who had similar names to persons on the watchlist were being misidentified and subjected to additional screening, and in some cases delayed so long as to miss their flights. We also reported that resolving these misidentifications can take time and, therefore, affect air carriers and commerce. If changes in criteria result in more individuals being added to the lists, this could also increase the number of individuals who are misidentified, exacerbating these negative effects. 
In addition, we explained that individuals who believe that they have been inappropriately matched to the watchlist can petition the government for action and the relevant agencies must conduct research and work to resolve these issues. If more people are misidentified, more people may trigger this redress process, increasing the need for resources. Finally, any changes to the criteria or process would have to ensure that watchlist records are used in a manner that safeguards legal rights, including freedoms, civil liberties, and information privacy guaranteed by federal law. In reacting to the December 25 attempted terrorist attack, determining whether there were potential vulnerabilities related to the use of watchlist records when screening—not only individuals who fly into the country but also, for example, those who cross land borders—are important considerations. Screening agencies whose missions most frequently and directly involve interactions with travelers generally do not check against all records in the consolidated terrorist watchlist. In our October 2007 report, we noted that this is because screening against certain records may not be needed to support a respective agency’s mission or may not be possible because of computer system limitations, among other things. For example, CBP’s mission is to determine if any traveler is eligible to enter the country or is to be denied entry because of immigration or criminal violations. As such, CBP’s computer system accepts all records from the consolidated watchlist database that have either a first name or a last name and one other identifier, such as a date of birth. Therefore, TSC sends CBP the greatest number of records from the consolidated watchlist database for its screening. In contrast, one of the State Department’s missions is to approve requests for visas. Since only non-U.S. 
citizens and nonlawful permanent residents apply for visas, TSC does not send the department records on citizens or lawful permanent residents for screening visa applicants. Also, the FBI database that state and local law enforcement agencies use for their missions in checking individuals for criminal histories, for example, also receives a smaller portion of the watchlist. According to the FBI, its computer system requires a full first name, last name, and other identifier, typically a date of birth. The FBI noted that this is because having these identifiers helps to reduce the number of times an individual is misidentified as being someone on the list, and the computer system would not be effective in making matches without this information. Finally, the No Fly and Selectee lists collectively contain the lowest percentage of watchlist records because the remaining ones either do not meet the nominating criteria, as described above, or do not meet system requirements—that is, include full names and dates of birth, which TSA stated are required to minimize misidentifications. TSA is implementing a new screening program that the agency states will have the capability to screen an individual against the entire watchlist. Under this program, called Secure Flight, TSA will assume from air carriers the responsibility of comparing passenger information against the No Fly and Selectee lists. According to the program’s final rule, in general, Secure Flight is to compare passenger information only to the No Fly and Selectee lists. The supplementary information accompanying the rule notes that this will be satisfactory to counter the security threat during normal security circumstances. However, the rule provides that TSA may use the larger set of watchlist records when warranted by security considerations, such as if TSA learns that flights on a particular route may pose increased risks. TSA emphasized that use of the full terrorist screening database is not routine. 
Rather, TSA noted that its use is limited to circumstances in which there is information concerning an increased risk to transportation security, and the decision to use the full watchlist database will be based on circumstances at the time. According to TSA, as of January 2010, the agency was developing administrative procedures for utilizing the full watchlist when warranted. In late January 2009, TSA began to assume from airlines the watchlist matching function for a limited number of domestic flights, and has since phased in additional flights and airlines. TSA expects to assume the watchlist matching function for all domestic and international flights departing to and from the United States by December 2010. It is important to note that under the Secure Flight program, TSA requires airlines to provide the agency with each passenger’s full name and date of birth to facilitate the watchlist matching process, which should reduce the number of individuals who are misidentified as the subject of a watchlist record. We continue to monitor the Secure Flight program at the Congress’s request. In our October 2007 watchlist report, we recommended that the FBI and DHS assess the extent to which security risks exist by not screening against certain watchlist records and what actions, if any, should be taken in response. The agencies generally agreed with our recommendations but noted that the risks related to not screening against all watchlist records needs to be balanced with the impact of screening against all records, especially those records without a full name and other identifiers. For example, more individuals could be misidentified, law enforcement would be put in the position of detaining more individuals until their identities could be resolved, and administrative costs could increase, without knowing what measurable increase in security is achieved. 
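The identifier requirements described earlier, which drive these tradeoffs, can be approximated as a record-acceptance filter. This is a simplified sketch based on the report’s descriptions; the field names and the systems’ actual matching logic are assumptions for illustration.

```python
def accepts_record(system, record):
    """Illustrative sketch of whether a screening system's database accepts
    a watchlist record, per the report's descriptions:
    - CBP: a first name OR a last name, plus one other identifier.
    - FBI: a full first name, last name, AND another identifier (typically DOB).
    - No Fly / Selectee lists: full name and date of birth.
    Field names (first_name, last_name, dob, other_id) are hypothetical."""
    has_first = bool(record.get("first_name"))
    has_last = bool(record.get("last_name"))
    has_other = bool(record.get("dob") or record.get("other_id"))

    if system == "CBP":
        return (has_first or has_last) and has_other
    if system == "FBI":
        return has_first and has_last and has_other
    if system in ("NO_FLY", "SELECTEE"):
        return has_first and has_last and bool(record.get("dob"))
    raise ValueError(f"unknown system: {system}")
```

Stricter identifier requirements admit fewer records: that reduces misidentifications, but it also means more watchlist records go unscreened by that system, which is the tradeoff the agencies cited.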
While we acknowledge these tradeoffs and potential impacts, we maintain that assessing whether vulnerabilities exist by not screening against all watchlist records—and if there are ways to limit impacts—is critical and could be a relevant component of the government’s ongoing review of the watchlist process. Therefore, we believe that our recommendation continues to have merit. As we reported in October 2007, the federal government has made progress in using the consolidated terrorist watchlist for screening purposes, but has additional opportunities to use the list. For example, DHS uses the list to screen employees in some critical infrastructure components of the private sector, including certain individuals who have access to vital areas of nuclear power plants or transport hazardous materials. However, many critical infrastructure components are not using watchlist records, and DHS has not finalized guidelines to support such private sector screening, as HSPD-6 mandated and we previously recommended. In that same report, we noted that HSPD-11 tasked the Secretary of Homeland Security with coordinating across other federal departments to develop (1) a strategy for a comprehensive and coordinated watchlisting and screening approach and (2) a prioritized implementation and investment plan that describes the scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of necessary activities. We reported that without such a strategy, the government could not provide accountability and a basis for monitoring to ensure that (1) the intended goals for, and expected results of, terrorist screening are being achieved and (2) use of the watchlist is consistent with privacy and civil liberties. We recommended that DHS develop a current interagency strategy and related plans. 
According to DHS’s Screening Coordination Office, during the fall of 2007, the office led an interagency effort to provide the President with an updated report, entitled, HSPD-11, An Updated Strategy for Comprehensive Terrorist-Related Screening Procedures. The office noted that the report was formally submitted to the Executive Office of the President through the Homeland Security Council and reviewed by the President on January 25, 2008. Further, the office noted that it also provided a sensitive version of the report to the Congress in October 2008. DHS provided us an excerpt of that report to review, stating that it did not have the authority to share excerpts provided by other agencies, and we were unable to obtain a copy of the full report. The information we reviewed only discussed DHS’s own efforts for coordinating watchlist screening across the department. Therefore, we were not able to determine whether the HSPD-11 report submitted to the President addressed all of the components called for in the directive or what action, if any, was taken as a result. We maintain that a comprehensive strategy, as well as related implementation and investment plans, as called for by HSPD-11, continue to be important to ensure effective governmentwide use of the watchlist process. In addition, in our October 2007 report, we noted that establishing an effective governance structure as part of this strategic approach is particularly vital since numerous agencies and components are involved in the development, maintenance, and use of the watchlist process, both within and outside of the federal government. Also, establishing a governance structure with clearly-defined responsibility and authority would help to ensure that agency efforts are coordinated, and that the federal government has the means to monitor and analyze the outcomes of such efforts and to address common problems efficiently and effectively. 
We determined at the time that no such structure was in place and that no existing entity clearly had the requisite authority for addressing interagency issues. We recommended that the Homeland Security Council ensure that a governance structure was in place, but the council did not comment on our recommendation. At the time of our report, TSC stated that it had a governance board in place, composed of senior-level agency representatives from numerous departments and agencies. However, we noted that the board provided guidance only concerning issues within TSC’s mission and authority. We also stated that while this governance board could be suited to assume more of a leadership role, its authority at that time was limited to TSC-specific issues, and it would need additional authority to provide effective coordination of terrorist-related screening activities and interagency issues governmentwide. In January 2010, the FBI stated that TSC has a Policy Board in place, with representatives from relevant departments and agencies, that reviews and provides input to the government’s watchlist policy. The FBI also stated that the policies developed are then sent to the National Security Council Deputies Committee (formerly the Homeland Security Council) for ratification. The FBI noted that this process was used for making the most recent additions and changes to watchlist standards and criteria. We have not yet been able to determine, however, whether the Policy Board has the jurisdiction and authority to resolve issues beyond TSC’s purview, such as issues within the intelligence community and in regard to the nominations process, similar to the types of interagency issues the December 25 attempted attack identified. We maintain that a governance structure with the authority for and accountability over the entire watchlist process, from nominations through screening, and across the government is important. 
On January 7, 2010, the President tasked the National Security Staff with initiating an interagency review of the watchlist process—including the business processes, procedures, and criteria—and the interoperability and sufficiency of supporting information technology systems. This review offers the government an opportunity to develop an updated strategy, related plans, and governance structure that would provide accountability to the administration, the Congress, and the American public that the watchlist process is effective at helping to secure the homeland. As we reported in October 2009, in an effort to improve the capability to detect explosives at aviation passenger checkpoints, TSA has 10 passenger screening technologies in various phases of research, development, procurement, and deployment, including the AIT (formerly Whole Body Imager). TSA is evaluating the AIT as an improvement over current screening capabilities of the metal detector and pat-downs specifically to identify nonmetallic threat objects and liquids. The AITs produce an image of a passenger’s body that a screener interprets. The image identifies objects, or anomalies, on the outside of the physical body but does not reveal items beneath the surface of the skin, such as implants. TSA plans to procure two types of AIT units: one type uses millimeter wave and the other type uses backscatter X-ray technology. Millimeter wave technology beams millimeter wave radio frequency energy over the body’s surface at high speed from two antennas simultaneously as they rotate around the body. The energy reflected back from the body or other objects on the body is used to construct a three-dimensional image. Millimeter wave technology produces an image that resembles a fuzzy photo negative. Backscatter X-ray technology uses a low-level X-ray to create a two-sided image of the person. Backscatter technology produces an image that resembles a chalk etching. 
As we reported in October 2009, TSA has not yet deployed any new technologies nationwide. However, according to a senior TSA official, as of December 31, 2009, the agency had deployed 40 millimeter wave AITs and had procured 150 backscatter X-ray units in fiscal year 2009; the agency estimates that these units will be installed at airports by the end of calendar year 2010. In addition, TSA plans to procure an additional 300 AIT units in fiscal year 2010, some of which will be purchased with funds from the American Recovery and Reinvestment Act of 2009. TSA plans to procure and deploy a total of 878 units at all category X through category IV airports. Full operating capability is expected in fiscal year 2014. TSA officials stated that the cost of the AIT is about $130,000 to $170,000 per unit, excluding installation costs. In addition, the estimated training costs are $50,000 per unit. While TSA stated that the AIT will enhance its explosives detection capability, because the AIT presents a full body image of a person during the screening process, concerns have been expressed that the image is an invasion of privacy. According to TSA, to protect passenger privacy and ensure anonymity, strict privacy safeguards are built into the procedures for use of the AIT. For example, the officer who assists the passenger never sees the image that the technology produces, and the officer who views the image is remotely located in a secure resolution room and never sees the passenger. Officers evaluating images are not permitted to take cameras, cell phones, or photo-enabled devices into the resolution room. To further protect passengers’ privacy, ways have been introduced to blur the passengers’ images. The millimeter wave technology blurs all facial features, and the backscatter X-ray technology has an algorithm applied to the entire image to protect privacy. 
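As a rough illustration of the scale of this investment, the per-unit figures above can be combined into a back-of-the-envelope total for the planned 878 units. The calculation below is our own illustrative arithmetic, not a GAO or TSA cost estimate, and it excludes installation costs, which the testimony does not quantify.

```python
# Illustrative arithmetic only: combines the reported per-unit purchase
# price range ($130,000-$170,000) with the estimated $50,000 per-unit
# training cost for the planned 878 AIT units. Installation costs are
# excluded because the testimony does not quantify them.
UNITS = 878
PRICE_LOW, PRICE_HIGH = 130_000, 170_000
TRAINING_PER_UNIT = 50_000

low_total = UNITS * (PRICE_LOW + TRAINING_PER_UNIT)
high_total = UNITS * (PRICE_HIGH + TRAINING_PER_UNIT)

print(f"${low_total:,} to ${high_total:,}")
# prints "$158,040,000 to $193,160,000"
```

On these assumptions, the planned deployment implies roughly $158 million to $193 million in procurement and training costs alone.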
Further, TSA has stated that the AIT’s capability to store, print, transmit, or save the image will be disabled at the factory before the machines are delivered to airports, and each image is automatically deleted from the system after it is cleared by the remotely located security officer. Once the remotely located officer determines that threat items are not present, that officer communicates wirelessly to the officer assisting the passenger. The passenger may then continue through the security process. Potential threat items are resolved through a direct physical pat-down before the passenger is cleared to enter the sterile area. In addition to privacy concerns, the AITs are large machines, and adding them to the checkpoint areas will require additional space, especially since the operators are segregated from the checkpoint to help ensure passenger privacy. We previously reported on several challenges TSA faces related to the research, development, and deployment of passenger checkpoint screening technologies and made a number of recommendations to improve this process. Two of these recommendations are particularly relevant today, as TSA moves forward with plans to install a total of 878 additional AITs—completing operational testing of technologies in airports prior to using them in day-to-day operations and assessing whether technologies such as the AIT are vulnerable to terrorist countermeasures, such as hiding threat items on various parts of the body to evade detection. First, in October 2009, we reported that TSA had relied on technologies in day-to-day airport operations that had not been proven to meet their functional requirements through operational testing and evaluation, contrary to TSA’s acquisition guidance and a knowledge-based acquisition approach. We also reported that TSA had not operationally tested the AITs at the time of our review, and we recommended that TSA operationally test and evaluate technologies prior to deploying them. 
In commenting on our report, TSA agreed with this recommendation. A senior TSA official stated that although TSA does not yet have a written policy requiring operational testing prior to deployment, TSA now includes in its contracts with vendors a requirement that checkpoint screening machines successfully complete laboratory tests as well as operational tests. The test results are then incorporated in the source selection plan. The official also stated that the test results are now required at key decision points by DHS’s Investment Review Board. In recently providing GAO with updated information for our October 2009 report, TSA stated that operational testing for the AIT was completed as of the end of calendar year 2009. We are in the process of verifying that TSA has tested all of the AIT’s functional requirements in an operational environment. Deploying technologies that have not successfully completed operational testing and evaluation can lead to cost overruns and underperformance. TSA’s procurement guidance provides that testing should be conducted in an operational environment to validate that the system meets all functional requirements before deployment. In addition, our reviews have shown that leading commercial firms follow a knowledge-based approach to major acquisitions and do not proceed with large investments unless the product’s design demonstrates its ability to meet functional requirements and be stable. The developer must show that the product can be manufactured within cost, schedule, and quality targets and is reliable before production begins and the system is used in day-to-day operations. TSA’s experience with the ETPs, which the agency uses for secondary screening, demonstrates the importance of testing and evaluation in an operational environment. The ETP detects traces of explosives on a passenger by using puffs of air to dislodge particles from the passenger’s body and clothing that the machine analyzes for traces of explosives. 
TSA procured 207 ETPs and in 2006 deployed 101 ETPs to 36 airports, the first deployment of a checkpoint technology initiated by the agency. TSA deployed the ETPs even though agency officials were aware that tests conducted during 2004 and 2005 on earlier ETP models suggested that they did not demonstrate reliable performance. Furthermore, the ETP models that were subsequently deployed were not first tested to prove their effective performance in an operational environment, contrary to TSA’s acquisition guidance, which recommends such testing. As a result, TSA procured and deployed ETPs without assurance that they would perform as intended in an operational environment. TSA officials stated that they deployed the machines without resolving these issues to respond quickly to the threat of suicide bombers. In June 2006, TSA halted further deployment of the ETP because of performance, maintenance, and installation issues. According to a senior TSA official, as of December 31, 2009, all but 9 ETPs have been withdrawn from airports and 18 ETPs remain in inventory. TSA estimates that the 9 remaining ETPs will be removed from airports by the end of calendar year 2010. In the future, using validated technologies would enhance TSA’s efforts to improve checkpoint security. Furthermore, retaining existing screening procedures until the effectiveness of future technologies has been validated could provide assurances that use of checkpoint technologies improves aviation security. Second, as we reported in October 2009, TSA does not know whether its explosives detection technologies, such as the AITs, are susceptible to terrorist tactics. Although TSA has obtained information on vulnerabilities at the screening checkpoint, the agency has not assessed vulnerabilities—that is, weaknesses in the system that terrorists could exploit in order to carry out an attack—related to passenger screening technologies, such as AITs, that are currently deployed. 
According to TSA’s threat assessment, terrorists have various techniques for concealing explosives on their persons, as was evident in Mr. Abdulmutallab’s attempted attack on December 25, where he concealed an explosive in his underwear. However, TSA has not assessed whether these and other tactics that terrorists could use to evade detection by screening technologies, such as AIT, increase the likelihood that the screening equipment would not detect the hidden weapons or explosives. Thus, without an assessment of the vulnerabilities of checkpoint technologies, it is unclear whether the AIT or other technologies would have been able to detect the weapon Mr. Abdulmutallab used in his attempted attack. TSA is in the process of developing a risk assessment for the airport checkpoints, but the agency has not yet completed this effort or clarified the extent to which this effort addresses any specific vulnerabilities in checkpoint technology. TSA officials stated that to identify vulnerabilities at airport checkpoints, the agency analyzes information such as the results from its covert testing program. TSA conducts national and local covert tests, whereby individuals attempt to enter the secure area of an airport through the passenger checkpoint with prohibited items in their carry-on bags or hidden on their persons. However, TSA’s covert testing programs do not systematically test passenger and baggage screening technologies nationwide to ensure that they identify the threat objects and materials the technologies are designed to detect, nor do the covert testing programs identify vulnerabilities related to these technologies. We reported in August 2008 that while TSA’s local covert testing program attempts to identify test failures that may be caused by screening equipment not working properly or caused by screeners and the screening procedures they follow, the agency’s national testing program does not attribute a specific cause of a test failure. 
We recommended, among other things, that TSA require the documentation of specific causes of all national covert testing failures, including documenting failures related to equipment, in the covert testing database to help TSA better identify areas for improvement. TSA concurred with this recommendation and stated that the agency will expand the covert testing database to document test failures related to screening equipment. In our 2009 report, we also recommended that the Assistant Secretary for TSA, among other actions, conduct a complete risk assessment—including threat, vulnerability, and consequence assessment—for the passenger screening program and incorporate the results into TSA’s program strategy, as appropriate. TSA and DHS concurred with our recommendation, but have not completed these risk assessments or provided documentation to show how they have addressed the concerns raised in our 2009 report regarding the susceptibility of the technology to terrorist tactics.

Mr. Chairman, this concludes our statement for the record.

For additional information on this statement, please contact Eileen Larence at (202) 512-6510 or larencee@gao.gov or Stephen Lord at (202) 512-4379 or lords@gao.gov. In addition to the contacts named above, Kathryn Bernet, Carissa Bryant, Frances Cook, Joe Dewechter, Eric Erdman, Richard Hung, Anne Laffoon, Linda Miller, Victoria Miller, and Michelle Woods made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The December 25, 2009, attempted bombing of Flight 253 raised questions about the federal government’s ability to protect the homeland and secure the commercial aviation system. This statement focuses on the government’s efforts to use the terrorist watchlist to screen individuals and determine if they pose a threat, and how failures in this process contributed to the December 25 attempted attack. This statement also addresses the Transportation Security Administration’s (TSA) planned deployment of technologies for enhanced explosive detection and the challenges associated with this deployment. GAO’s comments are based on products issued from September 2006 through October 2009 and selected updates in January 2010. For these updates, GAO reviewed government reports related to the December 25 attempted attack and obtained information from the Department of Homeland Security (DHS) and TSA on use of the watchlist and new technologies for screening airline passengers. The intelligence community uses standards of reasonableness to evaluate individuals for nomination to the consolidated terrorist watchlist. In making these determinations, agencies are to consider information from all available sources. However, for the December 25 subject, the intelligence community did not effectively complete these steps and link available information to the subject before the incident. Therefore, agencies did not nominate the individual to the watchlist or any of the subset lists used during agency screening, such as the “No Fly” list. Weighing and responding to the potential impacts that changes to the nomination criteria would have on the traveling public will be an important consideration in determining what changes may be needed. 
Also, screening agencies stated that they do not check against all records in the watchlist, partly because screening against certain records may not be needed to support a respective agency’s mission or may not be possible because of the requirements of computer programs used to check individuals against watchlist records. In October 2007, GAO reported that not checking against all records may pose a security risk and recommended that DHS and the FBI assess potential vulnerabilities, but they have not completed these assessments. TSA is implementing an advanced airline passenger prescreening program—known as Secure Flight—that could potentially result in the federal government checking passengers against the entire watchlist under certain security conditions. Further, the government lacks an up-to-date strategy and implementation plan—supported by a clearly defined leadership or governance structure—which are needed to enhance the effectiveness of terrorist-related screening and ensure accountability. In the 2007 report, GAO recommended that the Homeland Security Council ensure that a governance structure exists that has the requisite authority over the watchlist process. The council did not comment on this recommendation. As GAO reported in October 2009, since TSA’s creation, 10 passenger screening technologies have been in various phases of research, development, procurement, and deployment, including the Advanced Imaging Technology (AIT)—formerly known as the Whole Body Imager. TSA expects to have installed almost 200 AITs in airports by the end of calendar year 2010 and plans to install a total of 878 units by the end of fiscal year 2014. In October 2009, GAO reported that TSA had not yet conducted an assessment of the technology’s vulnerabilities to determine the extent to which a terrorist could employ tactics that would evade detection by the AIT. 
Thus, it is unclear whether the AIT or other technologies would have detected the weapon used in the December 25 attempted attack. GAO’s report also noted the problems TSA experienced in deploying another checkpoint technology that had not been tested in the operational environment. Since GAO’s October report, TSA stated that it has completed the testing as of the end of 2009. GAO is currently verifying that all functional requirements of the AIT were tested in an operational environment. Completing these steps should better position TSA to ensure that its costly deployment of AIT machines will enhance passenger checkpoint security.
Foreign science students and scholars generally begin the visa process by scheduling a visa interview. On the day of the appointment, a consular officer reviews the application, checks the applicant’s name in the Consular Lookout and Support System (CLASS), takes the applicant’s digital fingerprints and photograph, and interviews the applicant. Based on the interview and a review of pertinent documents, the consular officer determines if the applicant is eligible for nonimmigrant status under the 1952 Immigration and Nationality Act (INA). If the consular officer determines that the applicant is eligible to receive a visa, the applicant is notified right away and he or she usually receives the visa within 24 hours. In some cases, the consular officer decides that the applicant will need a Security Advisory Opinion (SAO), a response from Washington on whether to issue a visa to the applicant. SAOs are required for a number of reasons, including concerns that a visa applicant may engage in illegal transfer of sensitive technology. An SAO based on sensitive technology transfer concerns is known as Visas Mantis and, according to State officials, is the most common type of SAO applied to science applicants. It is also the most common type of SAO sent from the posts we visited in China, as well as in Kiev, Ukraine. The Visas Mantis process is designed to further four important national security objectives: prevent the proliferation of weapons of mass destruction and their delivery systems; restrain the development of destabilizing conventional military capabilities in certain regions of the world; prevent the transfer of arms and sensitive dual-use items to terrorists and states that sponsor terrorism; and maintain U.S. advantages in certain militarily critical technologies. The Visas Mantis process has several steps and involves multiple U.S. agencies (see fig. 1). 
In deciding if a Visas Mantis check is needed, the consular officer determines whether the applicant’s background or proposed activity in the United States could involve exposure to technologies on the Technology Alert List (TAL). The list, published by the State Department in coordination with the interagency community and based on U.S. export control laws, includes science and technology-related fields where, if knowledge gained from research or work in these fields were used against the United States, it could potentially be harmful. If a Visas Mantis is needed, the consular officer generally informs the applicant that his or her visa is being temporarily refused under Section 221(g) of the INA, pending further administrative processing. After a consular officer decides that a Visas Mantis is necessary for an applicant, several steps are taken to complete the process. The officer or a Foreign Service National drafts a Visas Mantis SAO request, which contains information from the applicant’s application package and interview. The case is then generally reviewed and approved by a consular section chief or other consular official at post before it is transmitted both electronically and through State’s traditional cabling system. Once the request is sent, the State Department’s Bureau of Nonproliferation and other agencies review the information in the cable and respond within 10 working days to State’s Bureau of Consular Affairs. Several agencies, such as the Departments of Commerce and Energy, receive Mantis cases but do not routinely respond to Consular Affairs. State’s Bureau of Consular Affairs receives all responses pertaining to an applicant, summarizes them, and prepares a security advisory opinion. This SAO is then transmitted to the post electronically indicating that State does or does not have an objection to issuing the visa, or that more information is needed. 
A consular official at post reviews the SAO and, based on the information from Washington, decides whether to deny or issue the visa to the applicant. The officer then notifies the applicant that the visa has been denied or issued, or that more information is needed. Last year, consular officers submitted roughly 20,000 Mantis cases. According to consular officials, the visa is approved in the vast majority of cases. Data provided show that less than 2 percent of all Mantis requests result in visa denial. However, even when the visa is issued, the information provided by the consular posts on certain visa applicants is useful to various U.S. government agencies in guarding against illegal technology transfer. According to State, the Visas Mantis program provides State and other interested agencies with an effective mechanism to screen out those individuals who seek to evade or violate laws governing the export of goods, technology, or sensitive information. This screening, in turn, addresses significant issues of national security. Mantis processing times and the number of cases pending more than 60 days have declined significantly. In February 2004, we reported that the average length of time it took to process Mantis checks in Washington and for State to notify posts was 67 days for Mantis cases initiated from April through June 2003. State reported that the average Mantis processing time in October 2003 was 75 days. However, by November 2004, the processing and notification time for Mantis cases submitted was only about 15 days. Figure 2 demonstrates how the average Mantis processing time for cases submitted by all consular posts has declined since October 2003. State Department data also show significant improvement in the number of Mantis cases pending more than 60 days. In February 2004, we reported that 410 Visas Mantis cases submitted by seven posts in China, India, and Russia had been pending more than 60 days. 
However, recent data provided by the State Department show that, as of October 2004, only 63 cases (or 9 percent of all pending Mantis cases) had been pending for more than 2 months. Figure 3 shows a breakdown of all pending Mantis cases, sorted by the length of time they have been pending. Consular officials at the posts we visited confirmed that they were receiving faster responses from Washington and that the number of Mantis cases pending more than 60 days had declined. In response to our February 2004 report, State, DHS, and the FBI took several steps to achieve this reduction in Mantis processing times. State submitted a Visas Mantis action plan to DHS in May 2004. Although this plan remained a draft and was not fully implemented, State and other agencies acted on many of the steps called for in the plan and undertook other efforts to address difficulties that students and scholars face in obtaining visas. These actions included establishing a stand-alone Mantis team; providing additional guidance to consular officers; creating an electronic tracking system for Mantis cases; clarifying the roles and responsibilities of agencies involved in the Mantis process; reiterating a policy to give students and scholars priority interviews; and extending the validity period for Mantis clearances. These actions contributed to a decline in overall Mantis processing times. Despite these improvements, some issues remain that, if resolved, could further refine the Mantis process. Consular officers in key Mantis posts continue to have questions about how to implement the Mantis program. Several agencies that participate in the Mantis process are not fully connected electronically to State’s tracking system. In addition, the U.S. visa reciprocity schedule with China (which accounts for more than half of all Mantis cases) limits students and scholars to 6-month, two-entry visas. 
In order to facilitate travel, State Department officials proposed to extend visa validities for students and scholars on a reciprocal basis. However, the Chinese government did not agree to do so. Table 1 outlines the actions taken to improve Visas Mantis and the outstanding issues that need to be addressed. On February 25, 2004, the Assistant Secretary of State for Visa Services testified before the House Science Committee that the agency had taken steps to increase efficiency in the Visas Mantis process. These steps included creating a stand-alone Mantis team composed of five full-time employees dedicated to processing only Mantis cases. A key State official told us that he believed this action contributed significantly to the decline in Mantis processing times. The Assistant Secretary of State also testified that the agency had established procedures for expediting individual Mantis cases, when appropriate. These procedures involved faxing requests for expedition to the appropriate clearing agencies. Again, a key State official told us that closer cooperation with other agencies had led to faster Mantis processing times. In February 2004, we reported that consular staff at posts we visited said they were unsure whether they were contributing to lengthy waits because they lacked clear guidance on when to apply Visas Mantis checks and did not receive feedback on whether they were providing enough information in their Visas Mantis requests. As a result, State undertook a number of initiatives to provide guidance and feedback to the consular officers responsible for adjudicating cases that require Mantis checks. In 2004, the State Department:

- Added a special presentation on Visas Mantis to the nonimmigrant visa portion of the Basic Consular Training course.

- Funded a trip by Nonproliferation (NP) and Consular Affairs (CA) officials to a regional consular conference in China to make presentations and hold discussions with consular officers on specific Mantis issues.

- Organized a series of videoteleconferences with posts that submit large numbers of Visas Mantis SAO requests to provide direct feedback to embassy and consular officers on the quality of their Visas Mantis requests.

- Began issuing quarterly reports to the field about Visas Mantis policy and procedural issues to “help consular officers understand the Visa Mantis program better, provide guidance on what cases should be submitted as Visas Mantis SAO requests and what information should be included in requests, and to give feedback on the quality of those requests.” The first quarterly report was issued in March 2004, followed by two more in July and October.

- Arranged one-on-one meetings with the CA and NP offices for new junior officers assigned to posts with high Mantis volumes.

- Provided feedback to individual consular officers on the Mantis SAOs they have submitted. This initiative is designed both to recognize consular officers who are submitting well-documented requests that correctly target applicants of concern and to guide officers on what kind of information should be included in requests, depending on the type of visit the applicant plans to make. The direct feedback program also allows State to guide officers as to whether they are submitting SAO requests on the correct applicants.

- Established a classified webpage through the State Department’s intranet for consular officers to gain access to country-specific and other useful information related to the Mantis program. For example, it identifies websites that officials in NP use when determining how to respond to a Mantis case.

Officers at the posts we visited stated that some of these steps were extremely useful, especially those initiatives that allowed for direct interaction with officials from Consular Affairs and Nonproliferation. 
For example, a junior officer in Guangzhou who had attended the new Mantis presentation in consular training and had held a one-on-one meeting with Consular Affairs stated that these initiatives were useful for understanding how the SAO process works and why it is necessary. Another junior officer in Shanghai stated that a videoteleconference his post held with NP was invaluable for addressing his questions about the Visas Mantis program. Consular officials in China who met with representatives from NP and CA at the consular conference in February 2004 said that they found the opportunity helpful in addressing some of their Mantis-related questions. State developed and implemented an electronic system to track Mantis cases. Beginning in early 2003, State invested about $1 million to upgrade its Consular Consolidated Database to allow for electronic processing and tracking of all SAOs, including Visas Mantis requests, and to eliminate use of its traditional cabling system. This upgrade, called the “SAO Improvement Project” (SAO IP), resulted in a computer-based system that allows posts to send Mantis requests electronically. Previously, consular officers relied solely on the cabling system to transmit Mantis cases to Consular Affairs. As we found in our February 2004 report, this system resulted in Mantis cases getting lost due to cable formatting errors and duplicate cases being rejected by the FBI database. By attaching a unique identifier to each Mantis case, the SAO IP ensures that cases can be easily tracked. As an added measure, a block is built into the system that prevents consular officers from resubmitting Mantis requests on the same visa application. The SAO IP allows the State Department to more easily produce and track important statistics. For example, it enables State to follow average Mantis processing times, the number of Mantis cases submitted by each post, and the amount of time each step in the Mantis process is taking. 
Officials at posts we visited told us that they like being able to track individual cases as they go through the interagency process in Washington. In both Moscow and Kiev, for example, the SAO IP institutionalizes and expands upon tracking efforts that posts had begun on their own. Officials in Beijing told us that when they receive a public inquiry on a pending Mantis case, they can use the tracking system to determine the status of the case. In July 2004, the FBI, State, and DHS reached an agreement that fundamentally changed the FBI’s role in the Visas Mantis process. Officials from these agencies had determined that the FBI could fulfill its law enforcement role in the Mantis process without routinely clearing Mantis cases. Under the new “no objections policy,” the State Department does not have to wait for an FBI response before processing Mantis cases, but the FBI continues to receive information on visa applicants subject to Mantis checks. Prior to this change, State’s policy was to wait for a response from the FBI before proceeding with each Visas Mantis case. If the FBI requested that State “put a hold” on an individual Mantis case, State could not provide a response to post on the case until the hold was removed. This policy resulted in a backlog of almost 1,000 cases and contributed to lengthy wait times for visa applicants. As we reported in February 2004, it took the FBI an average of about 29 days to complete clearances on Mantis cases. In fact, FBI clearance often took longer than any other step in the Mantis process. Once cases had been cleared by the FBI, it could take another 6 days before State was informed. Some of the Mantis cases in the random sample we reviewed took more than 100 days to be processed at the FBI. The FBI’s new role allows State to process Mantis cases more easily. 
As the Bureau of Consular Affairs reported to consular posts in October 2004, “the change in the FBI’s role has made it easier for us to respond to most Mantis SAO requests more expeditiously.” The new agreement also allowed State to clear about 1,000 Mantis cases that the FBI had maintained on hold, many of them for a “very long time,” according to State officials. Consular officers we spoke to in China, Russia, and Ukraine confirmed that they were beginning to receive clearances on Mantis cases that had been pending for long periods of time. In November 2004, the remaining agencies responsible for clearing Mantis cases agreed to respond to the Bureau of Consular Affairs within 10 working days. Before this agreement, the agencies had 15 working days to respond to State. As a result, the total Mantis processing time could not be lower than about 20 calendar days (to account for weekends). According to Consular Affairs, under the new rule, State should be able to achieve total Mantis processing times of about 15 to 17 calendar days. In July 2004, the Secretary of State reminded posts via cable that they should give priority scheduling to persons applying for F, J, and M visas. As explained in the cable, students and exchange visitors are often subject to deadlines, so posts must have well-publicized and transparent procedures in place for obtaining priority appointments for them. Data show that this policy is critical for ensuring that students and scholars obtain their visas in time to meet their deadlines. For example, between January and September 2004, non-student, nonimmigrant visa applicants applying in Shanghai could expect to wait between 1 and 2 months to obtain an interview. Data provided by the State Department also point to long interview wait times for non-student or scholar visa applicants at other posts. 
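The arithmetic behind these calendar-day floors can be sketched with a working-day-to-calendar-day conversion. This is a rough illustration only, not State's actual scheduling method; it assumes a Monday start date and ignores federal holidays:

```python
from datetime import date, timedelta

def add_working_days(start: date, working_days: int) -> date:
    """Advance a date by a number of working days, skipping weekends."""
    current = start
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

start = date(2004, 11, 1)  # a Monday
old_span = (add_working_days(start, 15) - start).days  # 15-working-day deadline
new_span = (add_working_days(start, 10) - start).days  # 10-working-day deadline
print(old_span, new_span)
```

Under these assumptions, a 15-working-day deadline spans roughly three calendar weeks, consistent with the report's "about 20 calendar days" floor, while the new 10-working-day deadline spans about two weeks, supporting the projected 15- to 17-calendar-day totals.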
As of October 7, 2004 (when visa demand has usually declined from summer levels), the nonimmigrant visa interview wait time was 32 days in Beijing, 49 days in Guangzhou, and 34 days in Kiev. Post-specific data show that interview wait times for students are much shorter. For example, on June 15, 2004 (when visa demand is typically high), students and scholars in Shanghai could get an interview within 13 days, while other nonimmigrant visa applicants had to wait 56 days. Figure 4 illustrates that, in June 2004, a peak visa application period, non-student visa applicants could wait as long as 87 days to receive visas, while student applicants could receive visas in as few as 44 days. On February 11, 2005, State issued a cable to consular posts establishing new maximum validities for Mantis clearances, thereby allowing students and others to reapply for visas without undergoing frequent Mantis checks. Previously, Mantis clearances were valid for 1 year. Under that rule, if an applicant reapplied for a visa more than 1 year after the processing of the original Mantis check, he or she would have to undergo another Mantis check before receiving the new visa. Organizations representing the international scientific community argued that this validity period was too short. For example, foreign students attending 4-year college programs had to renew their Mantis clearances each year. Under the new validity periods, students can receive Mantis clearances valid for the length of the approved academic program up to 4 years, and temporary workers, exchange visitors, and intracompany transferees can receive clearances for the duration of an approved activity for up to 2 years. State estimates that this change will allow the agency to cut in half the total number of Mantis cases processed each year. The new validity periods are the result of negotiations between State, DHS, and the FBI. 
Although State and DHS proposed extending Mantis clearances in the summer of 2004, the FBI argued that an extension in Mantis clearances would significantly reduce its capability to track and investigate individuals subject to the Visas Mantis program. The FBI informed us that without the same frequency of automatic Mantis notifications, it would have far less knowledge of when these individuals enter the country, where they go, and what they are supposed to do while here. As a result, the FBI made its agreement to State’s and DHS’s proposal conditional on receiving access to the US-VISIT system and the Student and Exchange Visitor Information System (SEVIS). US-VISIT is housed in DHS and is a governmentwide program for collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States. SEVIS is a system that maintains information on international students and exchange visitors and their dependents in the United States. In February 2005, the FBI and DHS reached agreement on the terms of the FBI’s access to these two systems, allowing the proposed extension of Mantis clearances to take effect. China and Russia account for roughly 76 percent of all Mantis cases. However, we found that some consular officers at these posts remain confused about how to apply the Mantis program. For example, Beijing consular officers, some of them new to the post, consistently told us that they needed more clarity and guidance regarding how to use the Technology Alert List (TAL). According to a key consular official in Beijing, because these officers generally do not have scientific or technical backgrounds, they often do not understand what entries on the TAL mean or whether the visa applicant has advanced knowledge about the subject he or she plans to study in the United States. They are also confused about how to apply vague, seemingly benign categories. 
For example, officers in Beijing did not know whether to continue submitting Mantis requests for all individuals who fall under the category of "Communications – wireless systems, advanced," even if the visa applicant works for a foreign multinational corporation that is not a Chinese government-owned telecom enterprise. Few of the consular officers we spoke with in China, Russia, and Ukraine were familiar with the quarterly reports issued by Consular Affairs on Mantis issues. The only officer aware of the classified webpage maintained by the Consular Affairs Bureau told us that he did not find it useful because it had very little information on it and because it was hard for him to access the classified computer system, which is housed in a separate building far from the consular section. We found that consular officers at the consular posts we visited did not have regular opportunities to interact directly with officials from the Nonproliferation Bureau or the Consular Affairs Bureau knowledgeable about the Mantis program. For example, representatives from State's Nonproliferation Bureau and Consular Affairs Bureau have visited just one consular conference—the February 2004 conference in China. Although new consular officers are given the option to meet with NP and CA officials before traveling to post, State does not require these one-on-one meetings for officers assigned to key Mantis posts. Although China accounts for more than half of Mantis requests submitted, only one of the country's six consular posts has held a videoteleconference. Kiev requested a videoteleconference in early 2004 but had been unable to schedule one as of December 2004. Finally, in Beijing, only one of the officers who had attended the consular conference in February was still at post. 
Several law enforcement, intelligence, and non-intelligence agencies that receive Mantis cases, including the Departments of Commerce and Treasury, are not fully connected to State's electronic tracking system. This system, in addition to allowing State to track individual cases, was designed to eliminate the use of cables for the transmission of SAO cases because, according to State, they were "the source of garbled information and other errors that resulted in lost or delayed cases that required human intervention." For example, as we found in our February 2004 report, 700 Mantis cables that were sent from Beijing in fall 2003 did not reach the FBI. It took Consular Affairs about a month to identify that there was a problem and to provide the FBI with the cases. However, because several of the agencies that receive Mantis cases are not yet fully connected electronically to the system, they continue to receive Mantis cases through State's traditional cabling system. For the time being, consular officers send Mantis cases both electronically and by cable. Those agencies that are responsible for routinely clearing Mantis cases provide responses to State on compact discs that must be hand-carried between the agencies. As we found previously, this use of cables and couriers can lead to unnecessary delays in the process. State officials informed us that they are working to establish full connectivity with other agencies. However, State's goals for fully connecting certain agencies to the system have not been met. Further, State has not set milestones for connecting the remaining agencies to the system. In July 2004, State's Assistant Secretary for Congressional Relations wrote in a letter to the House Science Committee and other House and Senate committees that he expected the FBI to begin relying on the network on a regular basis by the end of that month. 
State and the FBI also signed a memorandum of understanding in July outlining the terms of the FBI’s electronic connectivity to the system. However, it was not until December 2004 that the FBI had developed the ability to gain access to State’s electronic tracking system to test the connection and discontinue using the cabling system. Although the FBI no longer actively clears Mantis cases, all agencies and bureaus that receive Mantis cases, regardless of whether they routinely clear cases, must be connected electronically to the system before use of the cabling system can be eliminated. State’s goal was to establish connectivity with another intelligence agency responsible for clearing Mantis cases by the end of 2004, but an agency official told us that a deadline of February 2005 was more realistic. State has not set milestones for connecting the remaining agencies that receive Mantis cases to the tracking system. A key agency official told us that providing full electronic connectivity to all agencies that receive Mantis cases will be a gradual process. China has one of the strictest visa reciprocity schedules for students and scholars. Under the United States’ reciprocity agreement with China, visas for F-1 and J-1 visa holders are only valid for up to 6 months, with two entries into the United States allowed. According to a key State official, the agency’s instructions to consular officers are to give single-entry, 3- month visas for applicants who undergo Mantis checks. This reciprocity schedule is one of the primary concerns of the international scientific community. Under the reciprocity schedule, if a Chinese citizen in the United States on an F or J visa leaves the United States, he or she will have to reapply for a visa. In 2004, State Department officials entered negotiations with the Chinese government to revise the visa reciprocity schedule for business travelers, tourists, and students. 
However, in December, State officials informed us that, while the Chinese government agreed to extend visa validities for business travelers and tourists, it did not agree to do so for students and scholars. While the new agreement with the Chinese government may address some of the concerns that the business community and tourism industry hold about travel to the United States, students and scholars will still need to reapply for visas frequently. In 2004, State, DHS, and the FBI collaborated successfully to reduce Mantis processing times. However, opportunities remain to further refine the Visas Mantis program and facilitate legitimate travel to the United States. As we reported in 2004, the use of the cabling system to transmit Mantis cases can lead to unnecessary delays in the process. The State Department has also noted that the cabling system is the source of garbled information and other errors. However, agencies continue to receive cases via cable because they are not yet fully connected electronically to State’s computer database. State has not established milestones for connecting these agencies to the electronic tracking system. Additionally, because consular officers have only a few minutes to determine whether a visa applicant who appears at their interview window needs to undergo a Mantis check, it is critical that they fully understand the purpose of the Mantis program. Our work suggests that consular officers learn best through direct interaction with those agency officials responsible for implementing the Mantis program in Washington. However, because consular officers at key Mantis posts do not routinely have opportunities for such interaction, there is a risk that they may submit Mantis cases on applicants who do not need them or fail to submit cases when appropriate. Further, officers may fail to include information in their Mantis requests that is most useful to agencies in Washington. 
In order to further streamline the Visas Mantis process, we recommend that the Secretary of State, in coordination with the Secretary of Homeland Security, take the following two actions:

- In order to eliminate use of the cabling system in the Mantis process, establish milestones for fully connecting all necessary U.S. agencies and bureaus to the computer system used to track and process Mantis cases.
- Provide more opportunities for consular officers at key Mantis consular posts to receive guidance and feedback on the Visas Mantis program through direct interaction with agency officials knowledgeable about the program. These opportunities could include, among other initiatives, mandatory one-on-one meetings with officials from the Bureaus of Consular Affairs and Nonproliferation for new consular officers before they travel to post; additional visits by State officials to consular conferences; and more frequent videoteleconferences with posts that submit large numbers of Mantis requests.

We provided a draft of this report to the Departments of State, Homeland Security, and Justice for their comments. State, DHS, and Justice provided written comments on the draft (see appendixes II, III, and IV, respectively). State commented that it had already made considerable progress with regard to the report's recommendations and outlined the actions it had taken to do so. For example, State has committed to sending representatives from its Consular Affairs and Nonproliferation Bureaus to India, China, and Russia to engage in on-site discussions of Mantis issues with consular officers. In addition, State is in the process of negotiating and signing memoranda of understanding with five U.S. agencies to share Mantis data electronically. DHS expressed appreciation for our work to identify actions to improve the Visas Mantis process and stated that it will pursue completion of GAO's recommendations. 
Justice responded to a recommendation included in the draft report that directed the Secretary of Homeland Security and the Attorney General to set a formal timeframe for completing negotiations on FBI access to US- VISIT and SEVIS. Because the two agencies reached agreement prior to publication of the final draft, the recommendation is not included in this report. The Department of Justice also provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to other interested Members of Congress. We are also sending copies to the Secretary of State and the Secretary of Homeland Security. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Staff contacts and other key contributors to this report are listed in Appendix V. The scope of our work covered improvements to and implementation of the Visas Mantis program between February 2004 and February 2005. To determine how long it takes to process Visas Mantis checks, we obtained and analyzed data from the State Department’s electronic tracking system for Security Advisory Opinions (SAOs). Specifically, we reviewed “SAO Processing Statistics” reports for all Mantis requests submitted to the State Department between April 1, 2004, and August 31, 2004, as well as other Mantis statistics produced by the State Department. These reports showed the average total processing time (in calendar days) for Mantis cases worldwide. To assess the reliability of State’s data on Visas Mantis cases, we (1) interviewed State officials responsible for creating and maintaining the electronic tracking system used for Mantis cases, (2) observed use of the tracking system, and (3) examined data collected through the tracking system. 
We noted in our report that average Mantis processing times, as calculated through State’s tracking system, do not take into account Mantis cases that are still pending. As a result, reported average Mantis processing times can change as cases that have been pending are cleared. State may also calculate average Mantis processing times based on the date on which a consular post initially drafted a Mantis case, rather than the date on which the consular post submitted the final draft to Washington. As a result, total Mantis processing times can seem longer than they really are. Despite these limitations, we determined that the data were sufficiently reliable for the purposes of identifying trends in Mantis processing. To identify and assess actions taken to implement our recommendation to improve the Visas Mantis program, we obtained documentation from key U.S. agencies, primarily the State Department, interviewed officials from these agencies, and observed training classes for new consular officers at the State Department’s Foreign Service Institute. We reviewed the Immigration and Nationality Act, the Foreign Affairs Manual, the Bureau of Consular Affairs’ quarterly reports on Visas Mantis, and other cables and related documents from that bureau. In Washington, we interviewed officials from the Departments of State, Homeland Security, and Justice. At State, we met with officials from the Bureau of Consular Affairs and the Bureau of Nonproliferation. At the Department of Homeland Security, we met with officials from the Directorate of Border Transportation and Security. At the Department of Justice, we met with officials from the Federal Bureau of Investigation’s Name Check Unit. We requested a meeting with Department of Justice and FBI officials to discuss negotiations with DHS regarding access to US-VISIT and SEVIS; they agreed to answer questions in writing. In August 2004, we observed classes at the Foreign Service Institute for newly assigned consular officers. 
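The pending-case limitation noted above — that an average computed only over completed cases can understate true processing times while slow cases are still open — can be illustrated with invented durations (all numbers below are hypothetical, not actual Mantis data):

```python
# Hypothetical case durations in days. Cases still pending have no
# recorded duration yet and are excluded from the reported average.
completed = [10, 12, 15, 18, 20]   # cleared cases

avg_completed_only = sum(completed) / len(completed)
print(avg_completed_only)  # 15.0 — looks short while slow cases are open

# Once the two pending cases clear (say at 70 and 100 days), the same
# cohort's true average is much higher:
all_cases = completed + [70, 100]
avg_true = sum(all_cases) / len(all_cases)
print(avg_true)  # 35.0
```

This is why the report notes that average processing times "can change as cases that have been pending are cleared."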
As part of that training, we attended the Visas Mantis briefing that had been added to the curriculum for new officers in response to our recommendation in the February 2004 report. To identify whether there were any remaining issues that affect the total amount of time it takes for science students and scholars to obtain visas, we analyzed data on interview wait times, spoke with representatives of various educational organizations, and observed a roundtable discussion on Mantis issues sponsored by the Senate Foreign Relations Committee. We obtained data on interview wait times at consular posts worldwide from State’s Bureau of Consular Affairs. We also obtained information on interview wait times from the consular posts in China and Russia. We met with representatives from the National Academies of Science, NAFSA: Association of International Educators, and the Alliance for International Education. The roundtable discussion we attended involved officials from the Departments of State and Homeland Security, as well as representatives from the International Institute for Education; the Association of American Universities; the National Institutes of Health; the National Academies of Science; NAFSA: Association of International Educators; and others. Representatives from various colleges and universities were also in attendance. We conducted fieldwork at five visa-issuing posts in three countries— China, Russia, and Ukraine. We chose these countries because they are leading places of origin for international science students and scholars visiting the United States and because they account for 78 percent of all Mantis cases. During our visits to these posts, we observed visa operations, reviewed selected Visas Mantis data, and interviewed consular staff about the Visas Mantis program. In China, we met with consular officers at the U.S. Embassy in Beijing and the consulates in Shanghai and Guangzhou. 
We also met with the Deputy Chief of Mission, as well as officials from the Office of the Defense Attaché; the Office of Environment, Science, Technology, and Health; the Office of Public Diplomacy; and the Foreign Commercial Service. In Beijing, we observed a meeting of the American Chambers of Commerce in China, where they discussed their experience with the Visas Mantis program. In both Shanghai and Guangzhou, we met with the Consul General. In Russia, we met with consular officers at the U.S. Embassy in Moscow. We met with the Consul General and his Deputy, as well as officials from the Department of Energy; the Office of Environment, Science, Technology, and Health; and Public Affairs. In Ukraine, we met with consular officers at the U.S. Embassy in Kiev. We met with the Deputy Chief of Mission, the Consul General and her Deputy, as well as officials from the Department of Energy; the Office of Public Affairs; and the Office of the Defense Attaché. Furthermore, in both Russia and Ukraine we held meetings with various organizations that sponsor summer work/travel exchanges, and they expressed their opinions and observations about the effects of U.S. visa policy on their programs. We conducted our work from July 2004 through February 2005 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Homeland Security’s letter dated January 28, 2005. 1. We have revised the report to reflect the fact that the FBI requested access to both US-VISIT and the Student and Exchange Visitor Information System and that US-VISIT does not contain SEVIS. The following are GAO’s comments on the Department of Justice’s letter dated February 7, 2005. 1. Because the Departments of Homeland Security and Justice reached agreement on the FBI’s access to US-VISIT and SEVIS prior to publication of the final draft, we did not include the recommendation in this report. 
The validity period for certain Visas Mantis clearances was extended on February 11, 2005. In addition to those named above, Elizabeth Singer, Carmen Donohue, Maria Oliver, Judith Williams, Mary Moutsos, Joe Carney, Martin de Alteriis, and Etana Finkler made key contributions to this report.
In February 2004, GAO reported that improvements were needed in the time taken to adjudicate visas for science students and scholars. Specifically, a primary tool used to screen these applicants for visas (the Visas Mantis program) was operating inefficiently. We found that it took an average of 67 days to process Mantis checks, and many cases were pending for 60 days or more. GAO also found that the way in which information was shared among agencies prevented cases from being resolved expeditiously. Finally, consular officers lacked sufficient program guidance. This report discusses the time to process Mantis checks and assesses actions taken and timeframes for improving the Mantis program. Mantis processing times have declined significantly. In November 2004, the average time to process a Mantis check was about 15 days, far lower than the average of 67 days we reported previously. The number of Mantis cases pending more than 60 days has also dropped significantly. Although an action plan that the State Department (State) drafted was not fully implemented, State and other agencies took several actions in response to our recommendations to improve Visas Mantis and to facilitate travel by foreign students and scholars. These actions included (1) adding staff to process Mantis cases, (2) providing additional guidance to consular officers, (3) developing an electronic tracking system, (4) clarifying roles and responsibilities of agencies involved in the Mantis program, (5) reiterating State's policy of giving students and scholars priority interviews, and (6) extending the validity of Mantis clearances. Nonetheless, some issues remain unresolved. Consular officers at posts we visited continue to need guidance on the Mantis program, particularly through direct interaction with State officials knowledgeable about the program. Several agencies that receive Mantis cases are not fully connected to State's electronic tracking system. 
This can lead to unnecessary delays in the process. Finally, students and scholars from China are limited to 6-month, two-entry visas. The Chinese government has rejected a proposal by the United States to extend visa validities, on a reciprocal basis, for students and scholars.
The lowest wage that a worker can earn is generally the federal minimum wage. The Fair Labor Standards Act of 1938 first established a minimum wage of 25 cents per hour, which has been raised numerous times, eventually reaching its current level of $7.25 per hour. Since 1980 the federal government has increased the federal minimum wage several times; however, its actual purchasing power after adjusting for inflation (i.e., its real value) has trended downward (see fig. 1). Many states have enacted their own minimum wage laws, and under the provisions of the Fair Labor Standards Act of 1938, an individual is generally covered by the higher of the state or federal minimum-wage rates. As of January 1, 2017, according to the Department of Labor, 29 states and the District of Columbia had minimum wage rates above the federal minimum rate, and 2 states had minimum wage rates below the federal minimum rate. State minimum wages ranged from $5.15 per hour in Georgia and Wyoming to $11.50 per hour in the District of Columbia (see fig. 2). According to BLS data, hourly workers earning at or below the federal minimum wage of $7.25 per hour made up 1.6 percent of total wage and salary workers in 2016. The number of minimum wage workers since 1995 ranged from a low of 1.7 million in 2006 to a high of 4.8 million in 1997 (see fig. 3). According to BLS, more than one-half of hourly workers earning the federal minimum wage were employed part-time in 2016, in contrast to about one-quarter of all hourly workers. By working part-time—defined by BLS as 1 to 34 hours per week—these workers are less likely to receive health insurance and other benefits from their employers. Research has also shown that many contingent workers, including some part-time workers, experience fluctuations in their earnings and employment status, making them more likely to seek assistance from federally funded social safety net programs, if eligible. 
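The inflation adjustment behind the "real value" of the minimum wage described above works by scaling nominal dollars with a price-index ratio. A minimal sketch follows; the CPI-U values are rounded annual averages used only for illustration, not official BLS figures:

```python
# Rounded CPI-U annual averages (illustrative only).
cpi = {1980: 82.4, 2016: 240.0}

def real_value(nominal: float, year: int, base_year: int) -> float:
    """Express a nominal dollar amount from `year` in base-year dollars."""
    return nominal * cpi[base_year] / cpi[year]

# The 1980 minimum wage of $3.10, restated in 2016 dollars:
print(round(real_value(3.10, 1980, 2016), 2))  # about 9.03
```

Under these illustrative index values, the 1980 minimum wage was worth roughly $9 in 2016 dollars — well above the $7.25 nominal rate in 2016, consistent with the downward trend in real value shown in figure 1.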
As we previously reported, the official poverty measure used to provide information on how many people are "in poverty" in the United States was developed in the 1960s, based on the cost of food at that time. Each year Census updates its poverty thresholds—the income thresholds by which households are considered to be in poverty depending on family size. In 2016, the poverty thresholds ranged from $11,511 to $53,413, depending on family size and the age of the head of household (see table 1). The Department of Health and Human Services (HHS) uses these poverty thresholds to update its poverty guidelines each year. These guidelines are used as an eligibility criterion of a number of federal programs, including certain low-income programs. We also previously reported that the official poverty measure had not changed substantially since it was first developed, and concerns about its inadequacies had resulted in efforts to develop a new measure. For example, the poverty threshold (the income level used to determine who is "in poverty" each year) is based on three times the cost of food and does not take into account the cost of other basic necessities, such as shelter and utilities. Additionally, the official poverty measure considers cash income in determining a household's income, but does not include additions to income based on the value of noncash assistance (e.g., food assistance) or reductions based on other necessary living expenses (e.g., medical expenses or taxes paid). A National Academy of Sciences panel on poverty and an interagency technical working group suggested ways that a new poverty measure could address some of these concerns. Based on these suggestions, Census, with support from BLS, developed a new poverty measure—the Supplemental Poverty Measure (SPM)—in 2010. 
Unlike the official poverty measure, the SPM adds other forms of non- cash benefits, such as tax credits and SNAP benefits, and subtracts expenses, such as federal, state, and local income taxes, when calculating a household’s resources. We have previously reported that federally funded social safety net programs generally provide targeted assistance to specific groups within the low-income population, such as people with disabilities and workers with children. In 2015, we identified more than 80 federal programs (including 6 tax expenditures) that provided aid to individuals and families who may earn too little to meet their basic needs, cannot support themselves through work, or are disadvantaged in other ways. According to the Congressional Research Service, five of these programs— Medicaid, SNAP, TANF, EITC, and ACTC—accounted for $551.2 billion in spending in fiscal year 2015, or two-thirds of total federal spending on low-income assistance programs in that year. Eligibility criteria vary for these five federally funded programs and can include both financial and nonfinancial criteria. As we have previously reported, some programs are administered by states, which may apply their own eligibility criteria. Assistance may be provided to an individual, a family, or household. More recently, we reported that these programs’ eligibility criteria varied significantly in terms of the income limits used. In addition, we found that programs differed in the ways they measured applicants’ income, the standards and methods used to determine the income limit (i.e., the maximum income an applicant may have and still be eligible for the program), whether this limit is set nationwide or varies by state or locality, and the amount of the income limit itself. 
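The resource concept behind the SPM described above can be sketched as a simple calculation. The family's figures below are hypothetical, and the real SPM uses detailed Census definitions of benefits and expenses:

```python
def official_resources(cash_income: float) -> float:
    """Official poverty measure counts cash income only."""
    return cash_income

def spm_resources(cash_income: float, noncash_benefits: float,
                  tax_credits: float, taxes_paid: float,
                  necessary_expenses: float) -> float:
    """SPM adds non-cash benefits (e.g., SNAP) and refundable tax
    credits, and subtracts taxes paid and necessary expenses such as
    out-of-pocket medical costs."""
    return (cash_income + noncash_benefits + tax_credits
            - taxes_paid - necessary_expenses)

# Hypothetical family: $14,000 cash income, $3,000 in SNAP benefits,
# $2,500 in refundable credits, $500 in taxes, $1,200 in medical costs.
official = official_resources(14_000)
spm = spm_resources(14_000, 3_000, 2_500, 500, 1_200)
print(official, spm)  # 14000 vs 17800
```

Because the SPM counts the non-cash assistance this family receives, its measured resources exceed its cash income — which is why poverty rates under the two measures can differ for households receiving such benefits.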
We also found that rules for determining the maximum allowable income that an applicant or recipient may have and still be eligible, the amounts themselves, and whether they are set nationwide or vary by state or locality, varied significantly. For example, in TANF, income limits are determined by states. We found that some states use HHS’s poverty guidelines, which are adjusted annually, while others had a limit set in state law, which is not adjusted. In addition to having income tests, we found that some programs limit assets that an eligible individual or family may hold, while others do not. Furthermore, we found that programs may have ongoing requirements that families must satisfy to remain enrolled and receiving assistance. For example, we found that some programs periodically require participants to recertify that their income remains below the income limit. About 40 percent of U.S. workers ages 25 to 64 earned hourly wages of $16 or less (in constant 2016 dollars) over the period 1995 through 2016, according to our analysis of CPS data (see fig. 4). In each of the 6 years we reviewed, an estimated 1 to 5 percent of these workers earned an hourly wage at or below that year’s federal minimum wage, about 17 percent earned above the federal minimum wage to $12 per hour, and about 18 percent earned above $12 per hour to $16 per hour. The wage stagnation among low-wage workers depicted in our analysis of CPS data is also consistent with the literature on income inequality. Recent studies have found that while average wages experienced little or no change from 1973 through 2011 (when held in constant 2011 dollars), income inequality increased as a result of income growth among high-wage workers. Low-wage workers, on average, worked fewer hours per week from 1995 through 2016 than similar workers earning higher wages, according to our analysis of CPS data. 
In each of the years we reviewed, our estimates showed that workers who earned the federal minimum wage or less worked an average of about 30 hours per week, workers earning above the federal minimum wage to $12 per hour worked an average of about 33 hours per week, and those earning $12.01 to $16 per hour worked an average of about 37 hours per week (see fig. 5). One option that a worker has to increase earnings is working multiple jobs. Our analysis of CPS data found that few low-wage workers held multiple jobs and low-wage workers tended to work multiple jobs at the same rate as workers earning higher wages. Specifically, our estimates showed that about 5 percent of low-wage workers in each low-wage category worked multiple jobs, or about the same percent as workers earning more than $16 per hour in each of the years we reviewed. The combination of low wages and limited hours can affect a worker’s earnings and potential eligibility for federal social safety net programs. The reported growth of involuntary part-time workers—workers who would prefer to work more hours but are limited by economic conditions such as employers cutting hours or lack of full-time job opportunities—has likely reduced the average hours that low-wage workers can work. According to BLS, the number of these involuntary part-time workers peaked during the Great Recession and has yet to return to pre-recession levels. In 2016, BLS estimated that 5.6 million workers were involuntary part-time workers, of which about 61 percent said they were part-time because of business conditions and 34 percent said they could only find part-time employment. In previous reports, we found that low-wage workers employed on a contingent basis were more likely to earn low wages, less likely to have employer-sponsored benefits, and more likely to rely on social safety net programs. 
Low-wage workers who provide the sole income for a family may have income that is low enough to qualify them for federally funded social safety net programs. As shown in table 2, a hypothetical low-wage single parent who served as the sole income provider for a family of three would qualify for several of the five programs that we included in our analysis, provided any other applicable eligibility requirements were also met. The same five industries consistently employed the majority of low-wage workers from 1995 through 2016—leisure and hospitality, education and health, professional and business services, wholesale and retail trade, and manufacturing. Specifically, in each of the years we reviewed, these five industries employed approximately 70 percent of low-wage workers. Comparatively, these five industries also employed about 62 percent of workers earning more than $16 per hour (see fig. 6). Our estimates showed the highest concentration of low-wage workers to be in the education and health industry, with an estimated 22 to 25 percent of workers in each of our wage categories in this industry. 
Occupational Concentration of Low-Wage Workers

The following six occupational categories employed the majority of low-wage workers:

Food preparation and serving - fast food workers, cafeteria, and restaurant workers

Sales - cashiers, retail salespersons, and sales representatives

Office and administrative support - secretaries and administrative assistants, payroll and time-keeping clerks, and mail carriers

Building and grounds cleaning and maintenance - janitors and building keepers, maids and housekeeping workers, and grounds maintenance workers

Personal care and service - hairdressers and barbers, child care workers, and home care aides

Transportation and material moving - bus drivers, taxi drivers, ambulance drivers, and parking lot attendants

Low-wage workers were also highly concentrated in six occupational categories in 2016—food preparation and serving, sales, office and administrative support, building and grounds cleaning and maintenance, personal care and service, and transportation and material moving. (See textbox above for more detailed descriptions of these occupational categories.) Our estimates showed that half or more of low-wage workers were employed in one of these six occupational categories in 2016, whereas 26 percent of higher-wage workers were employed in these categories (see fig. 7). Although low-wage workers were concentrated in these six occupations, the amount of concentration varied by the amount of wages earned. For example, our estimates showed that workers earning hourly wages of the federal minimum wage or below in 2016 were most concentrated in personal care and services, sales, and food service and preparation, with an estimated 11 to 12 percent of these workers participating in each occupation. In contrast, our estimates showed that workers earning $12.01 to $16 per hour were concentrated in office and administrative support occupations, with an estimated 18 percent of these workers participating in this occupation. 
While low-wage workers had lower levels of education, on average, than workers earning higher wages, increases in their educational attainment from 1995 through 2016 generally did not lead to higher wages. Specifically, in each year we reviewed, about 68 percent of low-wage workers, compared with about half of higher-wage workers, had a high school diploma as their highest level of educational attainment. However, the proportion of low-wage workers with college degrees also increased during this time. Our estimates showed that the percentage of workers earning $12.01 to $16 per hour with college degrees increased from 16 percent in 1995 to 22 percent in 2016. A similar trend occurred in the other low-wage categories. For example, the percentage of workers who had at least a high school diploma yet earned the federal minimum wage or below increased from an estimated 70 percent in 1995 to 80 percent in 2016. Families with a low-wage worker ages 25 to 64 shared several common characteristics, according to our estimates based on CPS data. For example, our estimates showed that the majority of these families were not in poverty, had just one low-wage worker, and derived 80 percent or more of their family income from wages and salaries. In addition, on average, married families had two workers (contributing to a family income that often exceeded the poverty threshold); families with children had two children; and between 5 and 9 percent of families included someone over age 65. The majority of families with a low-wage worker were not in poverty, yet the percentage of families that were in poverty persisted in each of the years we reviewed and in each of the low-wage categories we examined. While higher wages were generally associated with a lower percentage of families in poverty in a given year, poverty levels among families of low-wage workers changed little in the past 2 decades across all three wage categories that we examined. (See fig. 8.) 
In almost all of the years we reviewed, the presence of a child in a family with a low-wage worker was associated with higher rates of poverty regardless of the worker’s wage category or marital status. For example, across all low-wage categories we examined from 1995 through 2016, 4 to 20 percent of married families with children were in poverty compared to 7 percent or fewer of married families without children. In 1995, however, poverty rates for unmarried households with and without children were not statistically different in any of the wage categories. In addition, while poverty was most prevalent among families with a worker earning the federal minimum wage or below, within this group it was most prevalent among single-parent families. (See fig. 9.) Our analysis of CPS data found sizeable percentages of families with a low-wage worker who had incomes just above the poverty threshold, potentially limiting their access to certain federal social safety net programs. The estimated percentage of families with incomes placing them just beyond the poverty thresholds remained relatively unchanged across the years we reviewed (see table 3). Families with a low-wage worker may be eligible for and use one or more federal social safety net programs. The largest of these programs is Medicaid, which HHS reported had 69 million individuals enrolled in April 2017. Our estimates based on CPS data found that the percentage of families with a low-wage worker enrolled in Medicaid rose significantly over the past 2 decades, almost tripling for families with a worker earning more than the federal minimum wage between 1995 and 2016 (see fig. 10). In 2016, about 29 percent of families with a worker earning federal minimum wage or below, 31 percent of families with a worker earning above federal minimum wage to $12 per hour, and 21 percent of families with a worker earning $12.01 to $16 per hour were enrolled in Medicaid. 
This growth in enrollment coincided with a rise in overall Medicaid enrollment (i.e., not just families with a low-wage worker), which, according to HHS, doubled during this time frame. Researchers have noted that key factors affecting the growth in Medicaid enrollment in the past decade were the 2008 recession and the expansion of Medicaid in some states under the Patient Protection and Affordable Care Act. Families with a low-wage worker may also be eligible for and use other federal social safety net programs (e.g., TANF, SNAP, EITC, and ACTC). Our estimates showed that 5 percent or less of families with a low-wage worker received TANF cash assistance at least once in the prior calendar year from 1995 through 2016. In previous work, we reported that as of July 2015, TANF income eligibility thresholds for a family of three ranged from $0 to $1,660 per month, depending on the state, with a median income threshold of $817. Given these thresholds, most low-wage workers, including workers earning federal minimum wage or below, would generally earn too much to qualify for TANF cash assistance in most states. In this report, our estimates showed that the percentage of families with a worker earning more than the federal minimum wage receiving SNAP benefits at least once in a calendar year doubled from 1995 to 2016. In 2016, about 16 percent of families with a worker earning federal minimum wage or below, 15 percent of families with a worker earning above federal minimum wage to $12 per hour, and 8 percent of families with a worker earning $12.01 to $16 per hour received SNAP benefits. The U.S. Department of Agriculture (USDA), which administers SNAP, has reported that the overall increase in SNAP enrollment from 1995 to 2014 was influenced by economic conditions, such as higher poverty rates during recessionary periods, and policy changes, such as increases in the value of a vehicle that could be excluded when calculating a family’s income. 
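The observation that even minimum-wage earners generally exceed TANF income limits can be checked with a rough back-of-the-envelope calculation using figures cited in this report: the $7.25 federal minimum wage, the roughly 30 hours per week such workers averaged, and the July 2015 median state income threshold of $817 per month for a family of three. The function name and the 52/12 weekly-to-monthly conversion are our own simplifications, not part of any program's actual eligibility formula:

```python
# Back-of-the-envelope check using figures reported in the text.
FEDERAL_MIN_WAGE = 7.25      # dollars per hour
MEDIAN_TANF_LIMIT = 817.0    # dollars per month, family of three (July 2015)
AVG_HOURS_PER_WEEK = 30      # average reported for minimum-wage workers

def monthly_earnings(hourly_wage, hours_per_week):
    # 52 weeks spread over 12 months converts weekly pay to monthly pay.
    return hourly_wage * hours_per_week * 52 / 12

earnings = monthly_earnings(FEDERAL_MIN_WAGE, AVG_HOURS_PER_WEEK)  # ~$942.50
exceeds_limit = earnings > MEDIAN_TANF_LIMIT  # earnings exceed the median limit
```

Under these assumptions, a minimum-wage worker averaging 30 hours per week earns about $942.50 per month, above the $817 median threshold, which is consistent with the report's conclusion.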
Finally, our estimates showed that EITC eligibility generally increased among families with a worker earning above federal minimum wage over this time frame, with an estimated 23 to 35 percent of those families eligible in 2016; whereas eligibility for the ACTC generally remained unchanged among families with a low-wage worker. A low-wage worker’s family type also influenced the extent that families used social safety net programs. When comparing program usage across different family types, we generally found that regardless of the low-wage workers’ wages, a greater percentage of single-parent families used selected programs than the other family types we examined. For example, among families with a worker earning federal minimum wage or below in 2016, our estimates showed that two-thirds of married families without children and about half of married families with children used none of the aforementioned programs. In contrast, more than half of single-parent families used three or more of the programs (see fig. 11). Agencies that administer the selected social safety net programs indicated that eligible working families participate in these programs at a lower rate than the total eligible population for reasons that are not well known. For example, IRS reported that in 2013, 80 percent of eligible filers—all of whom had earnings—claimed the EITC, with state rates ranging from 72 percent in the District of Columbia to 85 percent in Hawaii. Additionally, USDA estimates show that a significantly smaller percentage of eligible households with a wage earner participated in SNAP than other eligible households—70 percent compared to 83 percent in fiscal year 2014. 
Although some research has examined the reasons why eligible people choose not to participate in social safety net programs, our literature review found few studies that focused specifically on working families rather than the general eligible population, none of which had findings that were generalizable to the experiences of working families nationwide. Our interviews with state and local officials for the selected social safety net programs, representatives from nonprofit organizations, and researchers helped provide additional context for the experiences of working families. Specifically, the officials we interviewed identified several reasons why families with a low-wage worker may decline to participate in assistance programs for which they are eligible. Assumed ineligibility. Some workers may assume that earning income at a job automatically makes them ineligible for benefits, even if their earnings are low enough to qualify for assistance. A program official in Atlanta told us that eligible families are generally aware of the existence of a program, but assume they have to hit “rock bottom” before they can qualify for assistance. A researcher also told us that families that had exceeded the eligibility threshold in the past may assume they remain ineligible, even if their income has decreased. Lack of time. Some workers may find it difficult to take time off from work to apply for benefits in person at a program office, if required. Some states have implemented online or phone application processes to make programs more accessible to working families. However, as a nonprofit director in Santa Fe cautioned, not all families have Internet access and the proficiency required to complete an application online. Complex program requirements. Some families may find program documentation requirements complex and difficult to fulfill. 
For example, the state TANF application in one city we reviewed requires applicants to provide information verifying their earned and unearned income, money in the bank, immigration status, identity, vehicle registration, and immunizations of children under 7 years of age. Other program documents state that beneficiaries must also resubmit financial information, along with verification of their children’s school attendance, semi-annually or whenever changes occur that would affect their eligibility. Researchers have found that recent changes in the SNAP income documentation requirements, such as requiring less frequent recertification of income and eligibility, increased participation and retention of SNAP benefits. In addition, some states have combined applications for TANF, SNAP, and/or Medicaid into a single form, reducing the amount of paperwork that applicants must submit. Stigma. Some working families may be especially sensitive to the stigma associated with some social safety net programs, because their earnings did not make them as self-sufficient as they hoped. To avoid this stigma, according to several officials we interviewed, eligible working families may choose not to participate in a program if their income is sufficient for them to survive without assistance. For example, a 2007 study of 115 EITC recipients in the Boston area found that respondents who had received TANF benefits desired to leave the program as soon as possible. In contrast, according to a caseworker in San Francisco, while unemployed families face the same stigma, they cannot afford to refuse any benefits for which they qualify. Minimal benefit amounts. SNAP, TANF, EITC, and ACTC have means-tested structures that may reduce benefit levels as recipients’ incomes increase. Several officials told us that at some point the benefits may become too small to be worth the effort of obtaining them. 
For example, a study of low-income customers of a large tax preparation service in two counties in California during the 2007 tax season found that 16 percent of those who had previously applied for SNAP had stopped pursuing the benefits because the “hassle was not worth it.” Confusing tax rules. Some families may find the process of claiming the EITC and ACTC on their tax returns to be confusing. For example, a nonprofit director in the District of Columbia told us that applying for these tax credits can be complex, especially the requirements for qualifying children and filing status, and families claiming the credits may need high quality and costly assistance to prepare their taxes. To help mitigate this complexity, IRS encourages individuals who may qualify for the tax credits to visit one of the more than 12,000 free tax help locations across the country, but this task may also interfere with some individuals’ working hours. We provided a draft of this report to the Secretary of Labor and the Secretary of Commerce for comment. Each agency provided technical comments, which we incorporated in the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Department of Labor, the Department of Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Cindy Brown-Barnes at (202) 512-7215 or Oliver Richard at (202) 512-8424. You may also reach us by e-mail at brownbarnesc@gao.gov or richardo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix IV. 
Our review focused on the following questions: (1) what are the characteristics of the low-wage workforce and how have they changed over time, (2) to what extent are families with low-wage workers in poverty, and (3) to what extent do families with low-wage workers participate in selected social safety net programs and what factors affect their participation. After discussions with agency officials, we identified the Current Population Survey (CPS) as the data source best suited to answer our research questions. The CPS is a national survey designed and administered jointly by the Census Bureau (Census) and the Department of Labor’s Bureau of Labor Statistics (BLS), and it contains data on individual earnings, as well as poverty rates of families and individuals. CPS is a key source of official government statistics on employment and unemployment in the United States and is the data source for several BLS and Census reports addressing issues similar to those in our objectives. For example, it is used to produce a BLS report on the characteristics of minimum wage workers and a Census report on the supplemental poverty rate. The CPS is conducted on a monthly basis, but different questions are asked in different months during the year. Respondents are surveyed over two separate 4-month periods. Information on hourly wages and other labor force topics is collected on a monthly basis from a sub-sample of respondents. Information on poverty, program participation, and income over the prior calendar year is collected annually in the Annual Social and Economic Supplement (ASEC), conducted in March. In consultation with Census officials, we combined information on hourly wages with information on poverty and program participation by linking respondents of the ASEC to the months those respondents answered questions about hourly wages (March, April, May, and June). We used the CPS years 1995, 2000, 2005, 2010, 2015, and 2016. Estimates produced from CPS data are subject to sampling error. 
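The linkage of ASEC respondents to their monthly wage responses described above could be sketched roughly as follows. This is an illustrative simplification, not Census's or GAO's actual procedure; the identifier names (household_id, person_line) are hypothetical stand-ins for the CPS's documented household and person identifiers:

```python
# Illustrative sketch: join each ASEC (March supplement) record to the
# monthly CPS record in which the same person answered wage questions.
# Record layouts and field names here are hypothetical.

def link_asec_to_monthly(asec_records, monthly_records):
    """Return ASEC records joined with the matching monthly wage record."""
    # Index monthly wage records by person identifier for quick lookup,
    # keeping only the months linked in the analysis (March through June).
    wage_by_person = {
        (r["household_id"], r["person_line"]): r
        for r in monthly_records
        if r["month"] in ("mar", "apr", "may", "jun")
    }
    linked = []
    for rec in asec_records:
        key = (rec["household_id"], rec["person_line"])
        wage_rec = wage_by_person.get(key)
        if wage_rec is not None:  # unmatched respondents are dropped
            merged = dict(rec)
            merged["hourly_wage"] = wage_rec["hourly_wage"]
            linked.append(merged)
    return linked
```

Note that, as discussed in appendix III, dropping unmatched respondents means the linked file can understate population totals, since match rates between the two datasets were at least 90 percent but varied by year.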
For all of our estimates we weighted observations based on the monthly weight and generated standard errors under the assumption of with-replacement sampling using state as a stratification variable. To the extent possible, we compared our estimates and standard errors, derived from our weighting procedures, to values published by Census for that year and found them to be consistent. In addition to estimates, we generated standard errors or the margin of error for the 95 percent confidence interval, and reported them with estimates in figures and tables. Based on our data checks, review of documentation, and interviews with agency officials, we found the CPS data to be sufficiently reliable for our purposes. However, our method of estimating variance results in standard errors that are relatively conservative; that is, the 95 percent confidence intervals are wider than those resulting from the use of replicate weights. We relied on the monthly CPS information to obtain information about individual hourly wages to determine whether an individual was a low-wage worker. We relied on estimated hourly wages to determine the wage rate of salaried individuals, though in some cases we used reported hourly wages; to estimate hourly wages we used a method provided by BLS economists. This method included observations of (1) workers who reported an hourly wage and (2) salaried workers who reported weekly wages. We included both types of workers in our sample to obtain a broader spectrum of low-wage workers. This method also takes into account potential overtime hours worked and individuals working multiple jobs. We identified three mutually exclusive categories of low-wage workers earning: 110 percent of the federal minimum wage or below (salaried and hourly). This group consisted of workers that earned 110 percent of the federal minimum wage or below (based on the federal minimum wage in each of the years that we reviewed). 
Above 110 percent of the federal minimum wage to $12.00. This group consisted of workers that earned above 110 percent of the federal minimum wage in that year but not more than $12.00 (in constant 2016 dollars). $12.01 to $16.00. This group consisted of workers that earned between $12.01 and $16.00 (in constant 2016 dollars). To define these groups, we only included workers ages 25 to 64—a definition used in prior GAO work on the low-wage workforce. We used this definition to ensure that our sample included workers who were more likely to be independent, out of school, and less likely to be earning a retirement pension. For the groups described above, we reported the following statistics: occupation, industry, whether an individual worked multiple jobs, education level, and total number of hours worked at all jobs. As stated above, we relied on the ASEC to obtain information about the poverty rate and program participation of families. Family type: The unit of analysis within the CPS data was the “family record.” We examined four different family types: (1) married couple families with children; (2) married couple families without children; (3) single-parent families with children; and (4) other families. The “other families” category covers a wide variety of living situations, such as single adults living alone, but does not include married couples or a single parent living with children. Poverty: We relied on Census’ determination within the ASEC survey to determine whether a family was in poverty. We used two different poverty measures. The official poverty measure compares a family’s resources against a poverty threshold that varies by the number of supported adults and children. However, it excludes certain types of resources, such as in-kind assistance (e.g., Supplemental Nutrition Assistance Program benefits). We also used the Supplemental Poverty Measure, which is also provided by Census. 
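The three mutually exclusive wage categories defined in this appendix can be sketched as a simple classification, assuming the hourly wage has already been expressed in constant 2016 dollars and that the applicable federal minimum wage has been converted to the same dollars. The function and label names here are our own:

```python
# Sketch of the report's three mutually exclusive low-wage categories.
# Assumes hourly_wage and federal_min_wage are in constant 2016 dollars.

def wage_category(hourly_wage, federal_min_wage):
    """Assign a worker to one of the report's wage categories."""
    cutoff = 1.10 * federal_min_wage  # 110 percent of the federal minimum wage
    if hourly_wage <= cutoff:
        return "minimum wage or below"
    if hourly_wage <= 12.00:
        return "above minimum wage to $12.00"
    if hourly_wage <= 16.00:
        return "$12.01 to $16.00"
    return "above $16.00"  # not counted as low-wage in this analysis
```

With a $7.25 minimum wage, for example, the first category covers hourly wages up to about $7.98.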
The Supplemental Poverty Measure includes some in-kind assistance, but also deducts certain expenses such as child care from family resources. In 2016, Census reported that overall, the national rates of poverty are similar based on the two measures. Program participation: We relied on Census’ determination within the ASEC survey to determine whether a family participated in the following federal social safety net programs: EITC, ACTC, Medicaid, SNAP, and TANF cash assistance. Specifically, we measured the use of programs in the following ways: Medicaid enrollment: Anyone in the family enrolled in Medicaid, based on self-report. SNAP participation: The family received SNAP benefits during the prior calendar year, based on self-report. TANF participation: Anyone in the family received TANF cash assistance during the prior calendar year, based on self-report. EITC eligibility: Anyone in the family eligible for EITC receipt during the prior calendar year. Census determines EITC eligibility based on income and family structure. ACTC eligibility: Anyone in the family eligible for ACTC receipt during the prior calendar year. Census determines ACTC eligibility based on income and family structure. An important limitation to our analysis on program participation is that the use of the programs reported by CPS has been noted by researchers to be imprecise. The sources of imprecision are not fully known, and likely depend on the program. In the cases of Medicaid, SNAP, and TANF cash assistance, where benefit receipt is self-reported, CPS data are known to underreport program benefits, perhaps because a stigma is associated with its use. In addition, we reported that the Urban Institute staff found that CPS data captured about 61 percent of TANF cash assistance benefits received and 57 percent of SNAP benefits received in 2012. In the case of EITC and ACTC, Census imputes eligibility for the credits from reported income and other information about the family. 
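The contrast between the official measure's and the SPM's resource concepts can be illustrated with a stylized calculation. This is not Census's actual SPM computation (it omits, among other things, the differing threshold definitions), and the parameter names and the numbers in the usage example are invented for illustration:

```python
# Stylized contrast between the two resource concepts described above.

def official_resources(cash_income):
    # The official poverty measure counts only cash income.
    return cash_income

def spm_resources(cash_income, in_kind_benefits, tax_credits, taxes, expenses):
    # The SPM adds in-kind benefits (e.g., SNAP) and tax credits, and
    # deducts taxes and certain expenses (e.g., child care, medical costs).
    return cash_income + in_kind_benefits + tax_credits - taxes - expenses

# Hypothetical family: $30,000 cash income, $2,400 SNAP, $1,500 in credits,
# $2,000 in taxes, $1,200 in child care expenses.
spm = spm_resources(30000, 2400, 1500, 2000, 1200)       # 30700
official = official_resources(30000)                     # 30000
```

Depending on the mix of benefits and expenses, SPM resources can be either higher or lower than official-measure resources for the same family, which is why the two measures can classify different families as being in poverty.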
According to researchers, in some cases the CPS will overstate usage of the EITC by imputing the credit to those that do not claim it. In other cases, it will understate usage by failing to assign the credit to those that do claim it. However, as noted earlier, we used these data because they were the best available for the analysis we wished to conduct. To examine what is known about the reasons eligible working families do not participate in the five selected federal programs, we conducted a literature review of academic, government, and think tank reports published from 2006 to 2016. We excluded reports that we determined did not have sufficient methodological rigor. To gather examples and current information on factors influencing families’ decisions in a variety of settings, we interviewed researchers and industry groups as well as state and local officials at the selected social safety net programs and community nonprofit organizations that work with low-wage working families. We selected organizations from four metropolitan areas: Atlanta, San Francisco, Santa Fe, and Washington, D.C. The metropolitan areas represent a range of local minimum wage levels relative to the federal minimum wage, costs of living, and participation rates in the selected social safety net programs. We interviewed one to two state or local government or nonprofit agencies in each of these locations, but did not cover all five programs in each of the four locations. We conducted a content analysis of the reports identified during our literature review and information gained in our interviews to identify factors that applied specifically to families with a low-wage worker. The information we gathered from the literature and interviews is not generalizable, but is used to provide examples of factors affecting working families who are eligible for, but not receiving, assistance from social safety net programs. 
Margins of error for 95 percent confidence interval (+/-) The following table presents the estimated total number of families with a worker ages 25 to 64 and the estimated number of these families in poverty. The table provides estimates based on the worker’s family type and hourly wage. As discussed in appendix I, to develop these estimates, we merged multiple months of Current Population Survey (CPS) data with data from the Annual Social and Economic Supplement (ASEC) to the CPS to estimate poverty among families with low-wage workers. When we performed this procedure, the match rate between the datasets in each year was at least 90 percent, but varied by year. As a result, the estimates of the populations included in the table below may underestimate the actual number of families in poverty by as much as 10 percent. Because the extent of underestimation varied by year, conclusions based on comparisons of the estimates across years should be avoided. In addition, the margin of error was larger than the estimated number in many cases, which limited what we could report. Specifically, we did not report the number of families with incomes less than 50 percent of the poverty threshold. We did report estimates of the percentage of families with incomes less than 50 percent, by family type. Table 5 provides the estimated numbers for this group, with the margins of error that were not included in the body of the report. In addition to those named above, Kimberley Granger and Benjamin Bolitzer, Assistant Directors; Andrea Dawson and Jonathan S. McMurray, Analysts-in-Charge; Brittni Milam, Michael Naretta, Anna Maria Ortiz, Rhiannon Patterson, and Amanda Pritchard made key contributions to this report. 
Also contributing to this report were Susan Aschoff, Rachel Frisk, Alexander Galuten, Grant Mallie, Joel Marus, Sheila McCoy, Jean McSween, Mimi Nguyen, Jessica Nierenberg, Michelle Rosenberg, and Almeta Spencer.
According to the Department of Labor, private-sector employers have added millions of jobs to the economy since the end of the most recent recession in 2009; however, many are in low-wage occupations. GAO was asked to examine several characteristics of low-wage workers and their families, including their use of federally funded social safety net programs over time. This report answers the following questions: (1) What are the characteristics of the low-wage workforce and how have they changed over time? (2) To what extent are families with low-wage workers in poverty? and (3) To what extent do families with low-wage workers participate in selected social safety net programs and what factors affect their participation? GAO analyzed CPS data from 1995, 2000, 2005, 2010, 2015, and 2016 on worker characteristics, family poverty, and participation in social safety net programs. GAO defined low-wage workers as those workers ages 25 to 64 earning $16 or less per hour. In addition, GAO interviewed officials with state and local social safety net programs and other experts in four metropolitan areas—Atlanta, San Francisco, Santa Fe, and Washington, D.C.—representing a range of local minimum wage levels relative to the federal minimum wage, costs of living, and participation rates in five selected federally funded social safety net programs. According to GAO's analysis of data in the Census Bureau's Current Population Survey (CPS), on average, low-wage workers worked fewer hours per week, were more highly concentrated in a few industries and occupations, and had lower educational attainment than workers earning hourly wages above $16 in each year GAO reviewed—1995, 2000, 2005, 2010, 2015 and 2016. Their percentage of the U.S. workforce also stayed relatively constant over time. About 40 percent of the U.S. workforce ages 25 to 64 earned hourly wages of $16 or less (in constant 2016 dollars) over the period 1995 through 2016. 
The combination of low wages and few hours worked compounded the income disadvantage of low-wage workers and likely contributed to their potential eligibility for federal social safety net programs. About 20 percent of families with a worker earning up to the federal minimum wage (currently $7.25 per hour), 13 percent of families with a worker earning above the federal minimum wage to $12.00 per hour, and 5 percent of families with a worker earning $12.01 to $16 per hour were in poverty in each year GAO reviewed (see figure). The extent of poverty varied considerably by the type of family in which a worker lived. For example, single-parent families earning the federal minimum wage or below comprised a higher percentage of families in poverty than other family types. In contrast, married families with no children comprised the lowest percentage of families in poverty, and generally had family incomes at or above the poverty line. Note: All references to the “federal minimum wage” are based on 110 percent of the hourly federal minimum wage in effect that year or the equivalent hourly calculated wage for salaried workers. Brackets are used to represent margins of error of estimated percentages at a 95 percent confidence level. Families with a worker earning $16 or less per hour consistently used selected federally funded social safety net programs between 2005 and 2016, with varied factors affecting eligible families’ participation. GAO estimated that the percentage of these families enrolled in Medicaid rose significantly over the past 2 decades, almost tripling among families with a worker earning more than the federal minimum wage between 1995 and 2016. In contrast, an estimated 5 percent or less of these families received cash assistance from the Temporary Assistance for Needy Families (TANF) program at least once in the prior calendar year from 1995 through 2016. A low-wage worker’s family type also influenced the extent to which families used selected social safety net programs.
For example, among families with minimum wage earners in 2016, GAO estimated that about half or more married families used none of the programs GAO examined—Medicaid, TANF, Supplemental Nutrition Assistance Program, Earned Income Tax Credit, and Additional Child Tax Credit—while more than half of single-parent families used three or more. Program officials and others told GAO that eligible working families may not participate in programs for a variety of reasons, including time needed to apply for benefits, low benefit amounts, and assumed ineligibility.
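The bracketed 95 percent intervals shown in the figure, and the reporting rule noted in the appendix discussion above (an estimate is suppressed when its margin of error exceeds the estimate itself), can be expressed as simple functions. This is a minimal sketch for illustration; the function names and example values are assumptions, not taken from GAO's analysis.

```python
# Sketch of the interval and suppression logic described in the text.

def confidence_interval(estimate, margin_of_error):
    """95 percent confidence interval as (lower, upper) bounds."""
    return estimate - margin_of_error, estimate + margin_of_error

def reportable(estimate, margin_of_error):
    """True when the estimate is precise enough to report; an estimate
    is suppressed when the margin of error exceeds the estimate."""
    return margin_of_error <= estimate

# Hypothetical values: an estimated 20 percent with a +/- 3 point margin.
low, high = confidence_interval(20.0, 3.0)
print((low, high))            # (17.0, 23.0)
print(reportable(20.0, 3.0))  # True
print(reportable(2.0, 3.5))   # False: margin exceeds the estimate
```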
ORI is an independent group within HHS; its Director reports to the Secretary. Created from a merger of two offices within HHS, ORI’s mission is to oversee and direct PHS research integrity activities, which it does primarily through its handling of scientific misconduct investigations. In fiscal year 1994, ORI had a total operating budget of $4 million and maintained a staff of about 50 employees; currently, it has 43 employees. Although ORI investigates misconduct related to intramural research programs, about three-fourths of its caseload in 1994 related to oversight of extramural integrity reviews conducted by grantee institutions. ORI generally monitors the progress of an extramural investigation and reviews the institution’s final report. ORI also presents the results of misconduct investigations in administrative hearings before the HHS Departmental Appeals Board if ORI’s decisions are challenged. Besides its investigative function, ORI performs other research integrity activities. These efforts include developing model policies and procedures for handling allegations of scientific misconduct; evaluating institutional policies and processes for conducting investigations; investigating whistleblower retaliation complaints; and promoting scientific integrity through educational initiatives and other collaborations with universities, medical schools, and professional societies. Most allegations of scientific misconduct are made directly to the institutions conducting the research. Responding to an allegation involves a two-step process: an inquiry and, if necessary, an investigation. Institutions have the primary responsibility for responding to allegations involving extramural research; ORI’s role in these instances is usually that of reviewing the institution’s investigation report. ORI generally does not review institutional inquiries because an institution is not required to inform ORI that an inquiry is under way nor to submit a report at its conclusion. 
ORI does, however, review all investigations. Institutions must inform ORI when they begin an investigation and submit a report at its conclusion. ORI reviews the final report, the supporting materials, and the determinations to decide whether the investigation has been performed with sufficient objectivity, thoroughness, and competence. ORI plays a more direct role in responding to scientific misconduct allegations in PHS intramural research programs. It reviews all misconduct inquiries conducted by PHS agencies and conducts all investigations when they are needed. ORI’s handling of intramural scientific misconduct cases can be a complex undertaking that may involve collaborations among ORI staff, other agencies, and institutions performing research. In general, for intramural research allegations, the review process begins when an individual making an allegation (referred to as a complainant) alleges to either ORI or a PHS agency that another researcher (a respondent) committed scientific misconduct. If a misconduct allegation is made to ORI, an investigator within ORI’s Division of Research Investigations (DRI) conducts an initial screening primarily to determine if PHS funding is involved and whether the allegation falls within the PHS definition of scientific misconduct. Allegations that do not meet these criteria result in no action or are referred outside of ORI for consideration. When allegations do fall within PHS’ definition of misconduct, ORI forwards them to the PHS agency that funded the research and directs that agency to conduct a formal inquiry. This involves gathering information—including interviewing the subjects involved—to determine the nature of evidence available to support the allegation. ORI investigators may monitor inquiries and advise PHS agencies on matters such as procedures for sequestering laboratory research notebooks. They often directly assist the agency in sequestering the research data and other evidence. 
If the results of an inquiry suggest that misconduct may have occurred, ORI then opens a full investigation to determine the existence and magnitude of misconduct. An investigation could involve an extensive review of experiments and other scientific data as well as interviews with all parties involved with the research. The ORI investigator assigned to the case may seek assistance from a staff biostatistician and other in-house experts. Also, ORI may elicit assistance from outside scientists who have expertise in subject areas that ORI staff lack. Investigators produce a written report with findings. The report is reviewed by ORI management, its legal staff, and the respondent before being issued by the ORI Director. For investigations that result in a finding of misconduct, the ORI Director, in combination with the HHS debarring official, determines possible sanctions against the respondent, which may include debarment from receiving federal grant or contract funds for a specified period. ORI developed procedures for handling scientific misconduct cases and implemented them in November 1992. These procedures detail ORI’s process for receiving and assessing misconduct allegations, reviewing PHS agency inquiry reports, conducting investigations, and overseeing extramural investigations. The procedures were developed by a task force, consisting mainly of ORI management (in consultation with officials from PHS agencies) and the HHS Office of the General Counsel and the IG. We compared ORI’s policies and procedures with investigation guidelines established by the President’s Council on Integrity and Efficiency (PCIE). The PCIE guidelines apply to federal government investigations and generally outline issues and procedures for handling matters such as background and security inquiries as well as special investigations requested by any appropriate authority. 
These standards were established through a collaborative effort of staff from various inspector general offices throughout government. We found ORI’s procedures for handling scientific misconduct cases to be consistent with PCIE standards. Specifically, ORI procedures meet the PCIE standards by containing explicit statements on the qualifications of staff needed to handle investigations; independence required to conduct investigations; due professional care needed for the work; and other qualitative standards, such as planning, executing, and reporting investigation results. ORI investigators handling misconduct cases are scientists with doctoral degrees who were engaged in scientific research prior to their tenure with ORI. They represent varied scientific disciplines, such as biochemistry, genetics, biomedical engineering, and nutritional science. At the time of our review, each investigator had received the introductory investigation course given to most federal law enforcement agents. Supervisory investigators had taken some of the more advanced courses as well. Our assessment of case files confirmed that ORI investigators documented the work performed and followed established procedures in screening allegations and handling misconduct investigations. ORI investigators appeared to be making appropriate decisions as to which allegations did not merit further examination beyond their initial screening. We reviewed ORI case files on 30 allegations made to ORI since June 1993 that were closed without a formal inquiry. We sampled these 30 cases from a universe of 113 such closures. In each case, investigators followed established procedures, appropriately followed up on leads, and logically closed out the screening process. Our interviews with four individuals who had contacted ORI revealed a general satisfaction with ORI’s handling of their allegations or requests for information.
For example, a scientist who had asked whether a laboratory chief could take authorship credit for research conducted in his facility told us he accepted ORI’s explanation that his inquiry did not constitute misconduct. The scientist added that the ORI investigator handling the call provided useful information on NIH guidelines for research collaborations. ORI investigators also appeared to have followed established procedures for the 10 investigations we reviewed. However, two limitations on our analysis should be noted. First, at the time of our review, ORI had opened and closed only four intramural investigations since its formation in May 1992. Second, these four investigations did not require investigators to apply sophisticated investigative or scientific techniques. (For example, two of them related to alleged falsification of academic credentials.) The remaining six cases involved possible misconduct in extramural research in nonfederal institutions. In these six cases, ORI’s role was that of oversight, reviewing the institutions’ investigations. We concluded from our review of case files for the four ORI-led investigations that ORI investigators employed appropriate techniques. Specifically, investigators developed investigation plans, interviewed relevant individuals, analyzed scientific data where appropriate, coordinated with other HHS offices, appropriately followed up on leads, and wrote reports with evidence supporting their decisions. ORI investigators also appeared to have followed proper procedures in reviewing the extramural investigations. Our examination of the six extramural case files revealed that ORI investigators adequately documented their work and included relevant documents, such as copies of the inquiry and investigation reports, in case files. We observed from our review of documentation in the case files that investigators generally followed the steps outlined in the ORI procedures manual. 
For example, investigators made appropriate contacts with institutions and took steps to ensure that the institution conducting the investigation properly notified the complainant and respondent at various stages of the investigation. ORI’s procedures specify time frames for screening allegations and for conducting inquiries and investigations. These procedures state that screening should be completed within 30 days of receipt of the allegation. Inquiries are generally to be completed within 60 days of their initiation and investigations within 120 days. We observed delays in ORI’s handling of misconduct cases. ORI’s inability to close current cases in a timely manner has contributed to a backlog, some of which it inherited from its predecessor offices. When ORI was established, it inherited 70 active cases (inquiries and investigations) and about 420 more allegations which had apparently not been reviewed or screened. Although it has made progress in working through these inherited cases, ORI still has a substantial backlog. On April 30, 1995, ORI reported 169 active cases, including 71 inquiries and investigations. Although ORI completed the initial screening on 208 of the 288 misconduct allegations it received between June 1, 1993, and December 6, 1994, ORI investigators had not completed the screening process for the remaining 80 allegations, even though most of them had been unresolved for more than the 30 days allotted. More importantly, a majority of these (45 of 80) had remained open for over 6 months. Investigators and supervisors we interviewed attributed the backlog to competing work priorities. Our discussions with investigators and analysis of their workload indicated that, generally, investigators are each assigned 6 to 10 allegations to review in addition to their caseload of open investigations, inquiries, and oversight of extramural investigations. 
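The timeliness targets described above (30 days for screening, 60 days for inquiries, and 120 days for investigations) amount to a simple aging check against each case's stage. The following is a hypothetical sketch of such a check, not ORI's actual tracking system; the case records and field names are invented for illustration.

```python
# Hypothetical aging check against the stage targets in ORI's procedures.
TARGET_DAYS = {"screening": 30, "inquiry": 60, "investigation": 120}

def overdue_cases(cases):
    """Return the cases whose elapsed days exceed the target for their stage."""
    return [c for c in cases if c["elapsed_days"] > TARGET_DAYS[c["stage"]]]

# Invented caseload: case 2 mirrors the backlog described above, an
# allegation left in screening for more than 6 months.
caseload = [
    {"id": 1, "stage": "screening", "elapsed_days": 12},
    {"id": 2, "stage": "screening", "elapsed_days": 200},
    {"id": 3, "stage": "investigation", "elapsed_days": 180},
]
backlog = overdue_cases(caseload)
print([c["id"] for c in backlog])  # [2, 3]
```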
Although none of the investigators indicated that the workload was too high, they expressed concern about the backlog of initial allegations. For the four ORI-led investigations we reviewed, ORI went well beyond the targeted 120 days to complete them. Although we could not determine the actual staff time spent on these cases, the elapsed calendar time ranged from about 6 to 13 months. In two instances, investigators took what appeared to be an inordinate amount of time to complete relatively straightforward cases. For example, ORI took over a year to investigate and adjudicate a case of alleged falsification of academic credentials in several NIH grant applications. In another case, ORI took about 6 months for an investigation in which the respondent submitted a statement partially admitting to the misconduct prior to ORI’s opening an investigation. ORI investigators indicated that higher priority cases prevented them from closing these cases more expeditiously. The investigators also gave specific reasons for each case. In the first case, investigators wanted to establish a pattern of falsifying credentials to counter the respondent’s claim that the incident was not common. In the other case, ORI initiated an investigation because it wanted to ensure that appropriate procedures were followed and that the full extent of the respondent’s misconduct was identified. We also observed a lack of timeliness in closing extramural investigations. The six cases we reviewed were open for about 9 to 13 months. The time spent on four of these cases can be partly attributed to additional work ORI did on these cases after the institutions completed their investigations. During the course of our review, ORI officials took various steps to reduce the case backlog and improve ORI’s work. These actions ranged from giving greater attention to setting priorities among cases to providing increased guidance to extramural institutions. 
Priority Setting—ORI has begun holding frequent management meetings to systematically review all open cases. The purpose is to decide which cases can be closed and to set priorities among the open cases. Early Settlement Agreements—ORI has also begun to seek earlier resolutions of cases through advance settlements with respondents (generally referred to as voluntary exclusions). When respondents voluntarily agree to or accept ORI’s early disposition of a case, further pursuit of an investigation or appeal can be avoided. Significant savings in investigative and litigation resources may result. Reassigning Program Analysts—ORI has assigned a program analyst to expedite allegation assessments by performing initial tasks, such as securing research articles and grant information. Managers and investigators indicated that this effort has proven useful and support the increased use of program analysts for this purpose. Guidance to Institutions—In an effort to better educate intramural and extramural institutions on handling scientific misconduct, ORI has instituted formal processes for communicating with these entities. ORI now issues a quarterly newsletter, conducts seminars, and posts notices on an HHS computer bulletin board. Additionally, in November 1994, ORI issued draft model policies and instructions for handling misconduct cases to extramural institutions. In their present form, the guidelines are intended to assist institutions in complying with federal regulations. ORI sent these draft procedures to officials at 40 extramural institutions requesting their review and comment. We interviewed four of these officials, and the consensus was that the draft procedures would have a positive effect by giving institutions improved guidance for investigations. Although these measures appear to have helped ORI improve its handling of cases, additional efforts are needed to more effectively respond to workload demands.
Facing a substantial case backlog and lengthy delays in completing its work, ORI needs additional management tools to meet its workload demands. Specifically, ORI still needs strategic planning and resource assessments to decide how to most efficiently and effectively deploy its staff. For example, 11 of ORI’s staff (within DRI) are directly involved in investigations full time. The remaining 32 staff members (about 75 percent of total staff) are either professional or administrative staff who support DRI or are devoted to other ORI functions, such as policy development and education. Investigative work is not ORI’s only responsibility. Given the case backlog, however, ORI’s current staff allocation to investigations may not be sufficient even with the recent improvements ORI has made. ORI also needs a system to track the amount of time investigators spend on cases. Generally, each investigator handles 6 to 10 initial allegations of misconduct, 1 to 3 investigations, and 1 to 4 oversight cases. Some investigators we interviewed expressed occasional uncertainty about whether their use of time coincided with management’s priorities. Planning processes, such as routine staffing assessments, could help ORI’s management team systematically gauge the appropriate balance between ORI’s needs and resources. Staffing assessments might also help identify ways to augment ORI’s skill base—for example, identifying the need for different disciplines and backgrounds among the staff, such as trained criminal investigators. Such assessments might also help management determine ways to better use its administrative staff. The HHS IG reached a similar conclusion in its November 1994 report on ORI’s staffing and management. 
The IG recommended that ORI develop a strategic plan to help it “be better prepared to handle fluctuations in its work load and to provide a balance between its roles in stewardship and research integrity education.” The plan, according to the IG, should detail objectives in specific, measurable terms and show how resources and staff should be allocated to accomplish these objectives. The IG’s report made a number of other recommendations designed to improve ORI’s productivity. Another deficiency noted in the IG’s report was the absence of a structured timekeeping system. The report concluded that implementing such a system would greatly aid in determining whether ORI needs additional investigative staff. The IG recommended that ORI set and enforce performance measures for its staff regarding the quality, quantity, and timeliness of work conducted. Our work supports the IG’s conclusion that ORI needs a strategic plan and specific performance measures for its staff. Such a plan—particularly if it includes (1) a comprehensive assessment of ORI’s workload and staffing requirements and (2) measures to reduce the case backlog and close cases more quickly—should help ensure an optimum use of resources. Among its fiscal year 1995 management initiatives, ORI has started work on a strategic plan and will begin setting specific performance measures. Additionally, ORI officials told us they had initiated a two-pronged pilot study for tracking investigators’ time. One part of the pilot requires investigators to track time spent on an investigation. The second part requires investigators to record the time they devote to the specific tasks they perform, such as interviewing and analyzing research experiments, in addition to the total time spent. Since its inception, ORI has made progress in improving its handling of scientific misconduct cases. 
By continuing to follow sound investigative procedures and striving to improve its handling of cases, the office will gain increased public trust as a preserver of federal interest in biomedical research. However, persistent delays in case handling and deficiencies in its management systems are barriers that ORI needs to overcome if it is to effectively fulfill its mission in the future. ORI’s management team must confront these challenges and develop strategies to address them. HHS provided comments on a draft of this report, which we incorporated where appropriate (see app. II). HHS generally agreed with our findings and representation of its current efforts to improve productivity. HHS also described planned efforts to reduce the “management superstructure of ORI,” which should result in productivity gains. We incorporated technical comments provided by HHS, but did not include them in the appendix. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others on request. Please call me on (202) 512-7119 if you or your staff have any questions about this report. Other major contributors are listed in appendix III. To assess ORI’s process for handling misconduct cases, we reviewed its written guidance and examined how it screens allegations and conducts investigations and oversight functions. We compared ORI’s written policies and procedures for handling misconduct allegations and investigations with guidelines established for federal agencies that engage in comparable activities. In examining how ORI handles and screens misconduct allegations, we reviewed case files for 30 of the almost 300 allegations received from June 1993 to December 6, 1994. We selected cases that did not proceed to the inquiry phase. 
For four of these cases, we interviewed the individuals who made the allegations to obtain their perspectives on how well ORI handled them. We selected these particular individuals primarily because their case files did not contain sufficient information for us to determine whether ORI had completed its work responding to the allegations. To assess ORI procedures for conducting and monitoring misconduct investigations, we reviewed the 10 investigations that were opened since ORI’s establishment in May 1992 and completed by the time of our review. ORI conducted 4 of the 10 investigations; the remaining 6 were done by institutions and reviewed by ORI. We did not review cases initiated and conducted primarily by ORI’s predecessor offices because ORI had not implemented its current investigation procedures when these cases were opened. In addition, we neither independently verified the information ORI investigators used to reach their conclusions nor conducted our own investigation of cases. We supplemented our reviews of ORI case files with interviews with the seven investigators, two supervisory investigators, and the DRI Acting Director. We primarily sought to further our understanding of the investigative techniques used in handling misconduct cases, particularly the cases that presented greater technical challenges for investigators. As part of our interviews, we discussed procedures being used for cases currently under review. We interviewed officials at intramural and extramural institutions to gain their perspectives on ORI guidance for handling misconduct and on the quality of ORI investigations. We sought to obtain their views on ways in which ORI could improve its handling of misconduct cases. We also analyzed ORI’s automated case tracking system, which contains misconduct allegations. Finally, we interviewed ORI’s Deputy Director and the DRI Acting Director to ascertain current strategies to improve misconduct case management. 
We did not independently verify the accuracy of the data in ORI case files or automated databases. We did our work between July 1994 and April 1995 in accordance with generally accepted government auditing standards. Barry Tice, Assistant Director, (202) 512-4552 Glenn Davis, Evaluator-in-Charge, (312) 220-7600 Fred Chasnov Woodrow Hunt Cameo Zola
Pursuant to a congressional request, GAO determined whether the Department of Health and Human Services' (HHS) Office of Research Integrity (ORI): (1) has the appropriate policies, procedures, and investigative practices for handling misconduct allegations in a timely manner; and (2) has any staffing issues that may adversely affect ORI responsiveness. GAO found that: (1) ORI has developed and implemented procedures for handling misconduct cases by assessing the qualifications of its investigative staff, the level of independence and professional care needed to conduct investigations, and other qualitative standards for planning, executing, and reporting investigation results; (2) the techniques ORI uses in handling misconduct cases raise few concerns; (3) despite ORI's success in implementing procedures for handling misconduct cases, it continues to experience delays in closing cases; (4) ORI needs a comprehensive assessment of its resources, since it faces a substantial case backlog; and (5) ORI has initiated a number of actions to improve productivity and plans to refine its planning processes during fiscal year 1995.
To determine which federal government programs and functions should be added to the High Risk List, we consider whether the program or function is of national significance or is key to government performance and accountability. Further, we consider qualitative factors, such as whether the risk involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens’ rights, or could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness. In addition, we review the exposure to loss in quantitative terms, such as the value of major assets being impaired, revenue sources not being realized, or major agency assets being lost, stolen, damaged, or wasted. We also consider corrective measures planned or under way to resolve a material control weakness and the status and effectiveness of these actions. This year, we added two new areas, delineated below, to the High Risk List based on those criteria. In response to serious and long-standing problems with veterans’ access to care, which were highlighted in a series of congressional hearings in the spring and summer of 2014, Congress enacted the Veterans Access, Choice, and Accountability Act of 2014 (Pub. L. No. 113-146, 128 Stat. 1754), which provides $15 billion in new funding for Department of Veterans Affairs (VA) health care. Generally, this law requires VA to offer veterans the option to receive hospital care and medical services from a non-VA provider when a VA facility cannot provide an appointment within 30 days, or when veterans reside more than 40 miles from the nearest VA facility. Under the law, VA received $10 billion to cover the expected increase in utilization of non-VA providers to deliver health care services to veterans. The $10 billion is available until expended and is meant to supplement VA’s current budgetary resources for medical care.
Further, the law appropriated $5 billion to increase veterans’ access to care by expanding VA’s capacity to deliver care to veterans by hiring additional clinicians and improving the physical infrastructure of VA’s facilities. It is therefore critical that VA ensures its resources are being used in a cost-effective manner to improve veterans’ timely access to health care. We have categorized our concerns about VA’s ability to ensure the timeliness, cost-effectiveness, quality, and safety of the health care the department provides into five broad areas: (1) ambiguous policies and inconsistent processes, (2) inadequate oversight and accountability, (3) information technology challenges, (4) inadequate training for VA staff, and (5) unclear resource needs and allocation priorities. We have made numerous recommendations that aim to address weaknesses in VA’s management of its health care system—more than 100 of which have yet to be fully resolved. For example, to ensure that its facilities are carrying out processes at the local level more consistently—such as scheduling veterans’ medical appointments and collecting data on veteran suicides—VA needs to clarify its existing policies. VA also needs to strengthen oversight and accountability across its facilities by conducting more systematic, independent assessments of processes that are carried out at the local level, including how VA facilities are resolving specialty care consults, processing claims for non-VA care, and establishing performance pay goals for their providers. We also have recommended that VA work with the Department of Defense (DOD) to address the administrative burdens created by the lack of interoperability between their two IT systems.
A number of our recommendations aim to improve training for staff at VA facilities, to address issues such as how staff are cleaning, disinfecting, and sterilizing reusable medical equipment, and to more clearly align training on VA’s new nurse staffing methodology with the needs of staff responsible for developing nurse staffing plans. Finally, we have recommended that VA improve its methods for identifying VA facilities’ resource needs and for analyzing the cost-effectiveness of VA health care. The recently enacted Veterans Access, Choice, and Accountability Act included a number of provisions intended to help VA address systemic weaknesses. For example, the law requires VA to contract with an independent entity to (1) assess VA’s capacity to meet the current and projected demographics and needs of veterans who use the VA health care system, (2) examine VA’s clinical staffing levels and productivity, and (3) review VA’s IT strategies and business processes, among other things. The new law also establishes a 15-member commission, to be appointed primarily by bipartisan congressional leadership, which will examine how best to organize the VA health care system, locate health care resources, and deliver health care to veterans. It is critical for VA leaders to act on the findings of this independent contractor and congressional commission, as well as on those of VA’s Office of the Inspector General, GAO, and others, and to fully commit themselves to developing long-term solutions that mitigate risks to the timeliness, cost-effectiveness, quality, and safety of the VA health care system. It is also critical that Congress maintain its focus on oversight of VA health care. In the spring and summer of 2014, congressional committees held more than 20 hearings to address identified weaknesses in the VA health care system. 
Sustained congressional attention to these issues will help ensure that VA continues to make progress in improving the delivery of health care services to veterans. We plan to continue monitoring VA’s efforts to improve the timeliness, cost-effectiveness, quality, and safety of veterans’ health care. To this end, we have ongoing work focusing on topics such as veterans’ access to primary care and mental health services; primary care productivity; nurse recruitment and retention; monitoring and oversight of VA spending on training programs for health care professionals; mechanisms VA uses to monitor quality of care; and VA and DOD investments in Centers of Excellence—which are intended to produce better health outcomes for veterans and service members. Although the executive branch has undertaken numerous initiatives to better manage the more than $80 billion that is annually invested in information technology (IT), federal IT investments too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments. These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies have not consistently applied best practices that are critical to successfully acquiring IT investments. 
We have identified nine critical factors underlying successful major acquisitions that support the objective of improving the management of large-scale IT acquisitions across the federal government: (1) program officials actively engaging with stakeholders; (2) program staff having the necessary knowledge and skills; (3) senior department and agency executives supporting the programs; (4) end users and stakeholders involved in the development of requirements; (5) end users participating in testing of system functionality prior to end user acceptance testing; (6) government and contractor staff being stable and consistent; (7) program staff prioritizing requirements; (8) program officials maintaining regular communication with the prime contractor; and (9) programs receiving sufficient funding. While there have been numerous executive branch initiatives aimed at addressing these issues, implementation has been inconsistent. Over the past 5 years, we have reported numerous times on shortcomings with IT acquisitions and operations and have made about 737 related recommendations, 361 of which were to the Office of Management and Budget (OMB) and agencies to improve the implementation of the recent initiatives and other government-wide, cross-cutting efforts. As of January 2015, about 23 percent of the 737 recommendations had been fully implemented. Given the federal government’s continued experience with failed and troubled IT projects, coupled with the fact that OMB initiatives to help address such problems have not been fully implemented, the government will likely continue to produce disappointing results and will miss opportunities to improve IT management, reduce costs, and improve services to the public, unless needed actions are taken. Further, it will be more difficult for stakeholders, including Congress and the public, to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. 
Recognizing the severity of issues related to government-wide management of IT, in December 2014 the Federal Information Technology Acquisition Reform provisions were enacted as a part of the Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015. I want to acknowledge the leadership of this Committee and the Senate Committee on Homeland Security and Governmental Affairs in leading efforts to enact this important legislation. To help address the management of IT investments, OMB and federal agencies should expeditiously implement the requirements of the December 2014 statutory provisions promoting IT acquisition reform. Doing so should (1) improve the transparency and management of IT acquisitions and operations across the government, and (2) strengthen the authority of chief information officers to provide needed direction and oversight. To help ensure that these improvements are achieved, congressional oversight of agencies’ implementation efforts is essential. Beyond implementing the recently enacted law, OMB and agencies need to continue to implement our previous recommendations in order to improve their ability to effectively and efficiently invest in IT. Several of these are critical, such as conducting TechStat reviews for at-risk investments, updating the public version of the IT Dashboard throughout the year, and developing comprehensive inventories of federal agencies’ software licenses. To ensure accountability, OMB and agencies should also demonstrate measurable government-wide progress in the following key areas: OMB and agencies should, within 4 years, implement at least 80 percent of our recommendations related to the management of IT acquisitions and operations. Agencies should ensure that a minimum of 80 percent of the government’s major acquisitions deliver functionality every 12 months. 
Agencies should achieve no less than 80 percent of the over $6 billion in planned PortfolioStat savings and 80 percent of the more than $5 billion in savings planned for data center consolidation. In the 2 years since the last high-risk update, two areas have expanded in scope. Enforcement of Tax Laws has been expanded to include IRS’s efforts to address tax refund fraud due to identity theft. Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure has been expanded to include the federal government’s protection of personally identifiable information and is now called Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting Personally Identifiable Information (PII). Since 1990, we have designated one or more aspects of Enforcement of Tax Laws as high risk. The focus of the Enforcement of Tax Laws high-risk area is on the estimated $385 billion net tax gap—the difference between taxes owed and taxes paid—and IRS’s and Congress’s efforts to address it. Given current and emerging risks, we are expanding the Enforcement of Tax Laws area to include IRS’s efforts to address tax refund fraud due to identity theft (IDT), which occurs when an identity thief files a fraudulent tax return using a legitimate taxpayer’s identifying information and claims a refund. While acknowledging that the numbers are uncertain, IRS estimated paying about $5.8 billion in fraudulent IDT refunds while preventing $24.2 billion during the 2013 tax filing season. While there are no simple solutions to combating IDT refund fraud, we have identified various options that could help, some of which would require legislative action. Because some of these options represent a significant change to the tax system that would likely burden taxpayers and impose significant costs on IRS for systems changes, it is important for IRS to assess the relative costs and benefits of the options. 
This assessment will help ensure an informed discussion among IRS and relevant stakeholders—including Congress—on the best option (or set of options) for preventing IDT refund fraud. Since 1997, we have designated the security of our federal cyber assets as a high-risk area. In 2003, we expanded this high-risk area to include the protection of critical cyber infrastructure. The White House and federal agencies have taken steps toward improving the protection of our cyber assets. However, advances in technology which have dramatically enhanced the ability of both government and private sector entities to collect and process extensive amounts of Personally Identifiable Information (PII) pose challenges to ensuring the privacy of such information. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. In addition, high-profile PII breaches at commercial entities have heightened concerns that personal privacy is not being adequately protected. Finally, both federal agencies and private companies collect detailed information about the activities of individuals—raising concerns about the potential for significant erosion of personal privacy. We have suggested, among other things, that Congress consider amending privacy laws to cover all PII collected, used, and maintained by the federal government and recommended that the federal agencies we reviewed take steps to protect personal privacy and improve their responses to breaches of PII. For these reasons, we added the protection of privacy to this high-risk area this year. Our experience with the high-risk series over the past 25 years has shown that five broad elements are essential to make progress. The criteria for removal are as follows: Leadership commitment. Demonstrated strong commitment and top leadership support. Capacity. Agency has the capacity (i.e., people and resources) to resolve the risk(s). Action plan. 
A corrective action plan exists that defines the root cause and solutions and that provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended. Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. Demonstrated progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 1 shows the five criteria and examples of actions taken by agencies to address the criteria. Throughout my statement and in our high-risk update report, we have detailed many actions taken to address the high-risk areas aligned with the five criteria as well as additional steps that need to be addressed. In each of our high-risk updates, for more than a decade, we have assessed progress to address the five criteria for removing the high-risk areas from the list. In this high-risk update, we are adding additional clarity and specificity to our assessments by rating each high-risk area’s progress on the criteria, using the following definitions: Met. Actions have been taken that meet the criterion. There are no significant actions that need to be taken to further address this criterion. Partially met. Some, but not all, actions necessary to meet the criterion have been taken. Not met. Few, if any, actions towards meeting the criterion have been taken. Figure 2 is a visual representation of varying degrees of progress in each of the five criteria for a high-risk area. Each point of the star represents one of the five criteria for removal from the High Risk List and each ring represents one of the three designations: not met, partially met, or met. 
The progress ratings used to address the high-risk criteria are an important part of our efforts to provide greater transparency and specificity to agency leaders as they seek to address high-risk areas. Beginning in the spring of 2014 leading up to this high-risk update, we met with agency leaders across government to discuss preliminary progress ratings. These meetings focused on actions taken and on additional actions that need to be taken to address the high-risk issues. Several agency leaders told us that the additional clarity provided by the progress rating helped them better target their improvement efforts. Since our last high-risk update in 2013, there has been solid and steady progress on the vast majority of the 30 high-risk areas from our 2013 list. Progress has been possible through the concerted actions and efforts of Congress and the leadership and staff in agencies and OMB. As shown in table 1, 18 high-risk areas have met or partially met all criteria for removal from the list; 11 of these areas also fully met at least one criterion. Of the 11 areas that have been on the High Risk List since the 1990s, 7 have at least met or partially met all of the criteria for removal and 1 area—DOD Contract Management—is 1 of the 2 areas that has made enough progress to remove subcategories of the high-risk area. Overall, 28 high-risk areas were rated against the five criteria, totaling a possible 140 high-risk area criteria ratings. Of these, 122 (or 87 percent) were rated as met or partially met. On the other hand, 13 of the areas have not met any of the five criteria; 3 of those—DOD Business Systems Modernization, DOD Support Infrastructure Management, and DOD Financial Management—have been on the High Risk List since the 1990s. Throughout the history of the high-risk program, Congress played an important role through its oversight and (where appropriate) through legislative action targeting both specific problems and the high-risk areas overall. 
Since our last high-risk report, several high-risk areas have received congressional oversight and legislation needed to make progress in addressing risks. Table 2 provides examples of congressional actions and of high-level administration initiatives—discussed in more detail throughout our report—that have led to progress in addressing high-risk areas. Additional congressional actions and administrative initiatives are also included in the individual high-risk areas discussed in this report. Since our 2013 update, sufficient progress has been made to narrow the scope of the following two areas. Our work has identified the following high-risk issues related to the Food and Drug Administration’s (FDA) efforts to oversee medical products: (1) oversight of medical device recalls, (2) implementation of the Safe Medical Devices Act of 1990, (3) the effects of globalization on medical product safety, and (4) shortages of medically necessary drugs. We added the oversight of medical products to our High Risk List in 2009. Since our 2013 high-risk update, FDA has made substantial progress addressing the first two areas; therefore, we have narrowed this area to remove these issues from our High Risk List. However, the latter two issues, globalization and drug shortages, remain pressing concerns. FDA has greatly improved its oversight of medical device recalls by fully implementing all of the recommendations made in our 2011 report on this topic. Recalls provide an important tool to mitigate serious health consequences associated with defective or unsafe medical devices. We found that FDA had not routinely analyzed recall data to determine whether there are systemic problems underlying trends in device recalls. We made specific recommendations to the agency that it enhance its oversight of recalls. 
FDA has fully implemented our recommendations: it has developed a detailed action plan to improve the recall process, analyzed 10 years of medical device recall trend data, and established explicit criteria and set thresholds for determining whether recalling firms have performed effective corrections or removals of defective products. These actions have addressed this high-risk issue. The Safe Medical Devices Act of 1990 requires FDA to determine the appropriate process for reviewing certain high-risk devices—either reclassifying certain high-risk medical device types to a lower-risk class or establishing a schedule for such devices to be reviewed through its most stringent premarket approval process. We found that FDA’s progress was slow and that it had never established a timetable for its reclassification or re-review process. As a result, many high-risk devices—including device types that FDA has identified as implantable, life sustaining, or posing a significant risk to the health, safety, or welfare of a patient—still entered the market through FDA’s less stringent premarket review process. We recommended that FDA expedite its implementation of the act. Since then, FDA has made good progress and began posting the status of its reviews on its website. FDA has developed an action plan with a goal of fully implementing the provisions of the act by the second quarter of calendar year 2015. While FDA has more work to do, it has made sufficient progress to address this high-risk issue. Based on our reviews of DOD’s contract management activities over many years, we placed this area on our High Risk List in 1992. For the past decade, our work and that of others has identified challenges DOD faces within four segments of contract management: (1) the acquisition workforce, (2) contracting techniques and approaches, (3) service acquisitions, and (4) operational contract support. 
DOD has made sufficient progress in one of the four segments—its management and oversight of contracting techniques and approaches—to warrant its removal as a separate segment within the overall DOD contract management high-risk area. Significant challenges still remain in the other three segments. We made numerous recommendations to address the specific issues we identified. DOD leadership has generally taken actions to address our recommendations. For example, DOD promulgated regulations to better manage its use of time-and-materials contracts and undefinitized contract actions (which authorize contractors to begin work before reaching a final agreement on contract terms). In addition, OMB directed agencies to take action to reduce the use of noncompetitive and time-and-materials contracts. Similarly, Congress has enacted legislation to limit the length of noncompetitive contracts and require DOD to issue guidance to link award fees to acquisition outcomes. Over the past several years, DOD’s top leadership has taken significant steps to plan and monitor progress in the management and oversight of contracting techniques and approaches. For example, through its Better Buying Power initiatives DOD leadership identified a number of actions to promote effective competition and to better utilize specific contracting techniques and approaches. In that regard, in 2010 DOD issued a policy containing new requirements for competed contracts that received only one offer—a situation OMB has noted deprives agencies of the ability to consider alternative solutions in a reasoned and structured manner and which DOD has termed “ineffective competition.” These changes were codified in DOD’s acquisition regulations in 2012. 
In May 2014, we concluded that DOD’s regulations help decrease some of the risks of one-offer awards, but also that DOD needed to take additional steps to continue to enhance competition, such as establishing guidance for when contracting officers should assess and document the reasons only one offer was received. DOD concurred with the two recommendations we made in our report and has since implemented one of them. DOD also has been using its Business Senior Integration Group (BSIG)—an executive-level leadership forum—for providing oversight in the planning, execution, and implementation of these initiatives. In March 2014, the Director of the Office of Defense Procurement and Acquisition Policy presented an assessment of DOD competition trends that provided information on competition rates across DOD and for selected commands within each military department and proposed specific actions to improve competition. The BSIG forum provides a mechanism by which DOD can address ongoing and emerging weaknesses in contracting techniques and approaches and by which DOD can monitor the effectiveness of its efforts. Further, in June 2014, DOD issued its second annual assessment of the performance of the defense acquisition system. The assessment included data on the system’s competition rate and goals, assessments of the effect of contract type on cost and schedule control, and the impact of competition on the cost of major weapon systems. An institution as large, complex, and diverse as DOD, and one that obligates hundreds of billions of dollars under contracts each year, will continue to face challenges with its contracting techniques and approaches. We will maintain our focus on identifying these challenges and proposing solutions. 
However, at this point DOD’s continued commitment and demonstrated progress in this area—including the establishment of a framework by which DOD can address ongoing and emerging issues associated with the appropriate use of contracting techniques and approaches—provide a sufficient basis to remove this segment from the DOD contract management high-risk area. In addition to the two areas that we narrowed—Protecting Public Health through Enhanced Oversight of Medical Products and DOD Contract Management—nine other areas met at least one of the criteria for removal from the High Risk List and were rated at least partially met for all four of the remaining criteria. These areas serve as examples of solid progress made to address high-risk issues through implementation of our recommendations and through targeted corrective actions. Further, each example underscores the importance of high-level attention given to high-risk areas within the context of our criteria by the administration and by congressional action. To sustain progress in these areas and to make progress in other high-risk areas—including eventual removal from the High Risk List—focused leadership attention and ongoing oversight will be needed. The National Aeronautics and Space Administration’s (NASA) acquisition management was included on the original High Risk List in 1990. NASA’s continued efforts to strengthen and integrate its acquisition management functions have resulted in the agency meeting three criteria for removal from our High Risk List—leadership commitment, a corrective action plan, and monitoring. 
For example, NASA has completed the implementation of its corrective action plan, which was managed by the Deputy Administrator, with the Chief Engineer, the Chief Financial Officer, and the agency’s Associate Administrator having led implementation of the individual initiatives. The plan identified metrics to assess the progress of implementation, which NASA continues to track and report semi-annually. These metrics include cost and schedule performance indicators for NASA’s major development projects. We have found that NASA’s performance metrics generally reflect improved performance. For example, average cost and schedule growth for NASA’s major projects has declined since 2011 and most of NASA’s major projects are tracking metrics, which we recommended in 2011 to better assess design stability and decrease risk. In addition, NASA has taken action in response to our recommendations to improve the use of earned value management—a tool designed to help project managers monitor progress—such as by conducting a gap analysis to determine whether each center has the requisite skills to effectively utilize earned value management. These actions have helped NASA to create better baseline estimates and track performance so that NASA has been able to launch more projects on time and within cost estimates. However, we found that NASA needs to continue its efforts to increase agency capacity to address ongoing issues through additional guidance and training of personnel. Such efforts should help maximize improvements and demonstrate that the improved cost and schedule performance will be sustained, even for the agency’s most expensive and complex projects. Recently, however, a few of NASA’s major projects have rebaselined their cost estimates, schedules, or both in light of management and technical issues, tempering the progress of the portfolio as a whole. 
In addition, several of NASA’s largest and most complex projects, such as NASA’s human spaceflight projects, are at critical points in implementation. We have reported on several challenges that may further impact NASA’s ability to demonstrate progress in improving acquisition management. The federal government has made significant progress in promoting the sharing of information on terrorist threats since we added this issue to the High Risk List in 2003. As a result, the federal government has met our criteria for leadership commitment and capacity and has partially met the remaining criteria for this high-risk area. Significant progress was made in this area by developing a more structured approach to achieving the Information Sharing Environment (Environment) and by defining the highest priority initiatives to accomplish. In December 2012, the President released the National Strategy for Information Sharing and Safeguarding (Strategy), which provides guidance on the implementation of policies, standards, and technologies that promote secure and responsible national security information sharing. In 2013, in response to the Strategy, the Program Manager for the Environment released the Strategic Implementation Plan for the National Strategy for Information Sharing and Safeguarding (Implementation Plan). The Implementation Plan provides a roadmap for the implementation of the priority objectives in the Strategy. The Implementation Plan also assigns stewards to coordinate each priority objective—in most cases, a senior department official—and provides time frames and milestones for achieving the outcomes in each objective. Adding to this progress is the work the Environment has done to address our previous recommendations. In our 2011 report on the Environment, we recommended that key departments better define incremental costs for information sharing activities and establish an enterprise architecture management plan. 
Since then, senior officials in each key department reported that any incremental costs related to implementing the Environment are now embedded within each department’s mission activities and operations and do not require separate funding. Further, the 2013 Implementation Plan includes actions for developing aspects of an architecture for the Environment. In 2014, the program manager issued the Information Interoperability Framework, which begins to describe key elements intended to help link systems across departments to enable information sharing. Going forward, in addition to maintaining leadership commitment and capacity, the program manager and key departments will need to continue working to address remaining action items informed by our five high-risk criteria, thereby helping to reduce risks and enhance the sharing and management of terrorism-related information. The Department of Homeland Security (DHS) has continued efforts to strengthen and integrate its management functions since those issues were placed on the High Risk List in 2003. These efforts resulted in the department meeting two criteria for removal from the High Risk List (leadership commitment and a corrective action plan) and partially meeting the remaining three criteria (capacity, a framework to monitor progress, and demonstrated, sustained progress). DHS’s top leadership, including the Secretary and Deputy Secretary of Homeland Security, have continued to demonstrate exemplary commitment and support for addressing the department’s management challenges. For instance, the Department’s Under Secretary for Management and other senior management officials have routinely met with us to discuss the department’s plans and progress, which helps ensure a common understanding of the remaining work needed to address our high-risk designation. 
In April 2014, the Secretary of Homeland Security issued Strengthening Departmental Unity of Effort, a memorandum committing the agency to, among other things, improving DHS’s planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. In addition, DHS has continued to provide updates to the report Integrated Strategy for High Risk Management, demonstrating a continued focus on addressing its high-risk designation. The integrated strategy includes key management initiatives and related corrective action plans for achieving 30 actions and outcomes, which we identified and DHS agreed are critical to addressing the challenges within the department’s management areas and to integrating those functions across the department. Further, DHS has demonstrated progress to fully address nine of these actions and outcomes, five of which it has sustained as fully implemented for at least 2 years. For example, DHS fully addressed two outcomes because it received a clean audit opinion on its financial statements for 2 consecutive fiscal years, 2013 and 2014. In addition, the department strengthened its enterprise architecture program (or technology blueprint) to guide IT acquisitions by, among other things, largely addressing our prior recommendations aimed at adding needed architectural depth and breadth. DOD supply chain management is one of the six issues that has been on the High Risk List since 1990. DOD has made progress in addressing weaknesses in all three dimensions of its supply chain management areas: inventory management, materiel distribution, and asset visibility. With respect to inventory management, DOD has demonstrated considerable progress in implementing its statutorily mandated corrective action plan. This plan is intended to reduce excess inventory and improve inventory management practices. 
Additionally, DOD has established a performance management framework, including metrics and milestones, to track the implementation and effectiveness of its corrective action plan and has demonstrated considerable progress in reducing its excess inventory and improving its inventory management. For example, DOD reported that its percentage of on-order excess inventory dropped from 9.5 percent in fiscal year 2009 to 7.9 percent in fiscal year 2013. DOD calculates the percentage by dividing the amount of on-order excess inventory by the total amount of on-order inventory. In response to our 2012 recommendations on the implementation of the plan, DOD continues to re-examine its goals for reducing excess inventory, has revised its goal for reducing on-hand excess inventory (it achieved its original goal early), and is in the process of institutionalizing its inventory management metrics in policy. DOD has also made progress in addressing its materiel distribution challenges. Specifically, DOD has implemented, or is implementing, distribution-related initiatives that could serve as a basis for a corrective action plan. For example, DOD developed its Defense Logistics Agency Distribution Effectiveness Initiative, formerly called Strategic Network Optimization, to improve logistics efficiencies in DOD’s distribution network and to reduce transportation costs. This initiative accomplishes these objectives by storing materiel at strategically located Defense Logistics Agency supply sites. Further, DOD has demonstrated significant progress in addressing its asset visibility weaknesses by taking steps to implement our February 2013 recommendation that DOD develop a strategy and execution plans that contain all the elements of a comprehensive strategic plan, including, among other elements, performance measures for gauging results. 
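As a small illustration of the on-order excess metric described above, the sketch below computes the percentage exactly as the report describes it: on-order excess inventory divided by total on-order inventory. The function name and the dollar figures in the example are hypothetical, not DOD data; only the resulting percentages (9.5 and 7.9 percent) come from the report.

```python
def on_order_excess_pct(excess_on_order: float, total_on_order: float) -> float:
    """Percentage of on-order inventory that is excess.

    Hypothetical helper illustrating the metric DOD reports: the amount
    of on-order excess inventory divided by total on-order inventory.
    """
    if total_on_order <= 0:
        raise ValueError("total on-order inventory must be positive")
    return 100.0 * excess_on_order / total_on_order


# Illustrative (made-up) dollar amounts, in billions:
print(round(on_order_excess_pct(0.79, 10.0), 1))  # 7.9
```

Note that under this definition, the reported drop from 9.5 percent to 7.9 percent reflects excess falling relative to total on-order inventory, not necessarily a drop in absolute dollar terms.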
The National Defense Authorization Act for Fiscal Year 2014 required that DOD’s strategy and implementation plans for asset visibility, which were in development, incorporate, among other things, the missing elements that we identified. DOD’s January 2014 Strategy for Improving DOD Asset Visibility represents a corrective action plan and contains goals and objectives—as well as supporting execution plans—intended to improve asset visibility. DOD’s Strategy calls for organizations to identify at least one outcome or key performance indicator for assessing performance in implementing the initiatives intended to improve asset visibility. DOD has also established a structure, including its Asset Visibility Working Group, for monitoring implementation of its asset visibility improvement initiatives.

Moving forward, the removal of DOD supply chain management from GAO’s High Risk List will require DOD to take several steps. For inventory management, DOD needs to demonstrate sustained progress by continuing to reduce its on-order and on-hand excess inventory, developing corrective actions to improve demand forecast accuracy, and implementing methodologies to set inventory levels for reparable items (i.e., items that can be repaired) with low or highly variable demand. For materiel distribution, DOD needs to develop a corrective action plan that includes reliable metrics for, among other things, identifying gaps and measuring distribution performance across the entire distribution pipeline. For asset visibility, DOD needs to (1) specify the linkage between the goals and objectives in its Strategy and the initiatives intended to implement it and (2) refine, as appropriate, its metrics to ensure they assess progress toward achievement of those goals and objectives.

DOD weapon systems acquisition has also been on the High Risk List since 1990. 
Congress and DOD have long sought to improve the acquisition of major weapon systems, yet many DOD programs are still falling short of cost, schedule, and performance expectations. The results are unanticipated cost overruns, reduced buying power, and in some cases delays or reductions in the capability ultimately delivered to the warfighter. Our past work and prior high-risk updates have identified multiple weaknesses in the way DOD acquires the weapon systems it delivers to the warfighter and we have made numerous recommendations on how to address these weaknesses. Recent actions taken by top leadership at DOD indicate a firm commitment to improving the acquisition of weapon systems as demonstrated by the release and implementation of the Under Secretary of Defense for Acquisition, Technology, and Logistics’ “Better Buying Power” initiatives. These initiatives include measures such as setting and enforcing affordability constraints, instituting a long-term investment plan for portfolios of weapon systems, implementing “should cost” management to control contract costs, eliminating redundancies within portfolios, and emphasizing the need to adequately grow and train the acquisition workforce. DOD also has made progress in its efforts to assess the root causes of poor weapon system acquisition outcomes and in monitoring the effectiveness of its actions to improve its management of weapon systems acquisition. Through changes to acquisition policies and procedures, DOD has made demonstrable progress and, if these reforms are fully implemented, acquisition outcomes should improve. At this point, there is a need to build on existing reforms by tackling the incentives that drive the process and behaviors. 
In addition, further progress must be made in applying best practices to the acquisition process, attracting and empowering acquisition personnel, reinforcing desirable principles at the beginning of the program, and improving the budget process to allow better alignment of programs and their risks and needs. While DOD has made real progress on the issues we have identified in this area, with the prospect of slowly growing or flat defense budgets for years to come, the department must continue this progress and get better returns on its weapon system investments than it has in the past. DOD has made some progress in updating its policies to enable better weapon systems outcomes. However, even with this call for change we remain concerned about the full implementation of proposed reforms, as DOD has, in the past, failed to convert policy into practice. In addition, although we reported in March 2014 on the progress many DOD programs are making in reducing their cost in the near term, individual weapon programs are still failing to conform to best practices for acquisition or to implement key acquisition reforms and initiatives that could prevent long-term cost and schedule growth.

We added the high-risk area of ensuring the security of federal information systems in 1997 and expanded it this year to include protection of PII. Although significant challenges remain, the federal government has made progress toward improving the security of its cyber assets. For example, Congress, as part of its ongoing oversight, passed five bills, which became law, for improving the security of cyber assets. The first, the Federal Information Security Modernization Act of 2014, revises the Federal Information Security Management Act of 2002 and clarifies roles and responsibilities for overseeing and implementing federal agencies’ information security programs. The second law, the Cybersecurity Workforce Assessment Act, requires DHS to assess its cybersecurity workforce and develop a strategy for addressing workforce gaps. 
The third, the Homeland Security Cybersecurity Workforce Assessment Act, requires DHS to identify all of its cybersecurity positions and calls for the department to identify specialty areas of critical need in its cybersecurity workforce. The fourth, the National Cybersecurity Protection Act of 2014, codifies the role of DHS’ National Cybersecurity and Communications Integration Center as the nexus of cyber and communications integration for the federal government, intelligence community, and law enforcement. The fifth, the Cybersecurity Enhancement Act of 2014, authorizes the Department of Commerce, through the National Institute of Standards and Technology, to facilitate and support the development of voluntary standards to reduce cyber risks to critical infrastructure.

The White House and senior leaders at DHS have also committed to securing critical cyber assets. Specifically, the President has signed legislation and issued strategy documents for improving aspects of cybersecurity, as well as an executive order and a policy directive for improving the security and resilience of critical cyber infrastructure. In addition, DHS and its senior leaders have committed time and resources to advancing cybersecurity efforts at federal agencies and to promoting critical infrastructure sectors’ use of a cybersecurity framework. However, securing cyber assets remains a challenge for federal agencies. Continuing challenges, such as shortages of qualified cybersecurity personnel and ongoing weaknesses in, and ineffective monitoring of, agencies’ information security programs, need to be addressed. 
Until the White House and executive branch agencies implement the hundreds of recommendations that we and agency inspectors general have made to address cyber challenges, resolve identified deficiencies, and fully implement effective security programs and privacy practices, a broad array of federal assets and operations may remain at risk of fraud, misuse, and disruption, and the nation’s most critical federal and private sector infrastructure systems will remain at increased risk of attack from adversaries. In addition to the recently passed laws addressing cybersecurity and the protection of critical infrastructures, Congress should also consider amending applicable laws, such as the Privacy Act and E-Government Act, to more fully protect PII collected, used, and maintained by the federal government.

The Department of the Interior’s (Interior) continued efforts to improve its management of federal oil and gas resources since we placed these issues on the High Risk List in 2011 have resulted in the department meeting one of the criteria for removal from our High Risk List: leadership commitment. Interior has implemented a number of strategies and corrective measures to help ensure the department collects its share of revenue from oil and gas produced on federal lands and waters. Additionally, Interior is developing a comprehensive approach to address its ongoing human capital challenges. In November 2014, Interior senior leaders briefed us on the department’s commitment to address the high-risk issue area by describing the following corrective actions. To help ensure Interior collects revenues from oil and gas produced on federal lands and waters, Interior has taken steps to strengthen its efforts to improve the measurement of oil and gas produced on federal leases by ensuring a link between what happens in the field (measurement and operations) and what is reported to Interior’s Office of Natural Resources Revenue or ONRR (production volumes and dispositions). 
To ensure that federal oil and gas leases are inspected, Interior is hiring inspectors and engineers with an understanding of metering equipment and measurement accuracy. The department has several efforts under way to assure that oil and gas are accurately measured and reported. For example, ONRR contracted for a study to automate data collection from production metering systems. In 2012, the Bureau of Safety and Environmental Enforcement hired and provided measurement training to a new measurement inspection team. To better ensure a fair return to the federal government from leasing and production activities from federal offshore leases, Interior raised royalty rates, minimum bids, and rental rates. For onshore federal leases, according to Interior’s November 2014 briefing document, ONRR’s Economic Analysis Office will provide the Bureau of Land Management (BLM) monthly analyses of global and domestic market conditions as BLM initiates a rulemaking effort to provide greater flexibility in setting onshore royalty rates. To address the department’s ongoing human capital challenges, Interior is working with the Office of Personnel Management to establish permanent special pay rates for critical energy occupations in key regions, such as the Gulf of Mexico. Bureau managers are being trained on the use of recruitment, relocation, and retention incentives to improve hiring and retention. Bureaus are implementing or have implemented data systems to support the accurate capture of hiring data to address delays in the hiring process. Finally, Interior is developing strategic workforce plans to assess the critical skills and competencies needed to achieve current and future program goals. To address its revenue collection challenges, Interior will need to identify the staffing resources necessary to consistently meet its annual goals for oil and gas production verification inspections. 
Interior needs to continue meeting its time frames for updating regulations related to oil and gas measurement and onshore royalty rates. It will also need to provide reasonable assurance that oil and gas produced from federal leases is accurately measured and that the federal government is getting an appropriate share of oil and gas revenues. To address its human capital challenges, Interior needs to consider how it will address staffing shortfalls over time in view of continuing hiring and retention challenges. It will also need to implement its plans to hire additional staff with expertise in inspections and engineering. Interior needs to ensure that it collects and maintains complete and accurate data on hiring times—such as the time required to prepare a job description, announce the vacancy, create a list of qualified candidates, conduct interviews, and perform background and security checks—to effectively implement changes to expedite its hiring process. The Centers for Medicare & Medicaid Services (CMS), in the Department of Health and Human Services (HHS), administers Medicare, which has been on the High Risk List since 1990. CMS has continued to focus on reducing improper payments in the Medicare program, which has resulted in the agency meeting our leadership commitment criterion for removal from the High Risk List and partially meeting our other four criteria. HHS has demonstrated top leadership support for addressing this risk area by continuing to designate “strengthened program integrity through improper payment reduction and fighting fraud” an HHS strategic priority and, through its dedicated Center for Program Integrity, CMS has taken multiple actions to improve in this area. For example, as we recommended in November 2012, CMS centralized the development and implementation of automated edits—prepayment controls used to deny Medicare claims that should not be paid—based on a type of national policy called national coverage determinations. 
Such action will help ensure that Medicare pays only those claims that are consistent with national policies. In addition, CMS has taken action to implement provisions of the Patient Protection and Affordable Care Act that Congress enacted to combat fraud, waste, and abuse in Medicare. For instance, in March 2014, CMS awarded a contract to a Federal Bureau of Investigation-approved contractor that will enable the agency to conduct fingerprint-based criminal history checks of high-risk providers and suppliers. This and other provider screening procedures will help block the enrollment of entities intent on committing fraud. CMS has made positive strides, but more needs to be done to fully meet our criteria. For example, CMS has demonstrated leadership commitment by taking actions such as strengthening provider and supplier enrollment provisions and improving its prepayment and postpayment claims review process in the fee-for-service (FFS) program. However, all parts of the Medicare program are on the Office of Management and Budget’s list of high-error programs, suggesting additional actions are needed. By implementing our open recommendations, CMS may be able to reduce improper payments and make progress toward fulfilling the four outstanding criteria to remove Medicare improper payments from our High Risk List. The following summarizes open recommendations and procedures authorized by the Patient Protection and Affordable Care Act that CMS should implement. 
CMS should require a surety bond for certain types of at-risk providers and suppliers; publish a proposed rule for increased disclosures of prior actions taken against providers and suppliers enrolling or revalidating enrollment in Medicare, such as whether the provider or supplier has been subject to a payment suspension from a federal health care program; establish core elements of compliance programs for providers; improve automated edits that identify services billed in medically unlikely amounts; develop performance measures for the Zone Program Integrity Contractors that explicitly link their work to the agency’s Medicare FFS program integrity performance measures and improper payment reduction goals; reduce differences between contractor postpayment review requirements, when possible; monitor the database used to track Recovery Auditors’ activities to ensure that all postpayment review contractors are submitting required data and that the data the database contains are accurate and complete; require Medicare administrative contractors to share information about the underlying policies and savings related to their most effective edits; and efficiently and cost-effectively identify, design, develop, and implement an information technology solution that addresses the removal of Social Security numbers from Medicare beneficiaries’ health insurance cards.

The National Oceanic and Atmospheric Administration (NOAA) has made progress toward improving its ability to mitigate gaps in weather satellite data since the issue was placed on the High Risk List in 2013. NOAA has demonstrated leadership on both its polar-orbiting and geostationary satellite programs by making decisions on how it plans to mitigate anticipated and potential gaps, and in making progress on multiple mitigation-related activities. In addition, the agency implemented our recommendations to improve its polar-orbiting and geostationary satellite gap contingency plans. 
Specifically, in September 2013, we recommended that NOAA establish a comprehensive contingency plan for potential polar satellite data gaps that was consistent with contingency planning best practices. In February 2014, NOAA issued an updated plan that addressed many, but not all, of the best practices. For example, the updated plan includes additional contingency alternatives; accounts for additional gap scenarios; identifies mitigation strategies to be executed; and identifies specific activities for implementing those strategies along with associated roles and responsibilities, triggers, and deadlines. In addition, in September 2013, we reported that while NOAA had established contingency plans for the loss of geostationary satellites, these plans did not address user concerns over potential reductions in capability and did not identify alternative solutions and timelines for preventing a delay in the Geostationary Operational Environmental Satellite-R (GOES-R) launch date. We recommended the agency revise its contingency plans to address these weaknesses. In February 2014, NOAA released a new satellite contingency plan that improved in many, but not all, of the best practices. For example, the updated plan clarified requirements for notifying users regarding outages and impacts and provided detailed information on responsibilities for each action in the plan. NOAA has demonstrated leadership commitment in addressing data gaps of its polar-orbiting and geostationary weather satellites by making decisions about how to mitigate potential gaps and by making progress in implementing multiple mitigation activities. However, capacity concerns— including computing resources needed for some polar satellite mitigation activities and the limited time available for integration and testing prior to the scheduled launch of the next geostationary satellite—continue to present challenges. 
In addition, while both programs have updated their satellite contingency plans, work remains to implement and oversee efforts to ensure that mitigation plans will be viable if and when they are needed.

Overall, the government continues to take high-risk problems seriously and is making long-needed progress toward correcting them. Congress has acted to address several individual high-risk areas through hearings and legislation. Our high-risk update and high-risk website, http://www.gao.gov/highrisk/, can help inform the oversight agenda for the 114th Congress and guide efforts of the administration and agencies to improve government performance and reduce waste and risks. In support of Congress and to further progress to address high-risk issues, we continue to review efforts and make recommendations to address high-risk areas. Continued perseverance in addressing high-risk areas will ultimately yield significant benefits.

Thank you, Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee. This concludes my testimony. I would be pleased to answer any questions.

For further information on this testimony, please contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. Contact points for the individual high-risk areas are listed in the report and on our high-risk web site. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is one of the world's largest and most complex entities; about $3.5 trillion in outlays in fiscal year 2014 funded a broad array of programs and operations. GAO maintains a program to focus attention on government operations that it identifies as high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. Since 1990, more than one-third of the areas previously designated as high risk have been removed from the list because sufficient progress was made in addressing the problems identified. The five criteria for removal are: (1) leadership commitment, (2) agency capacity, (3) an action plan, (4) monitoring efforts, and (5) demonstrated progress. This biennial update describes the status of high-risk areas listed in 2013 and identifies new high-risk areas needing attention by Congress and the executive branch. Solutions to high-risk problems offer the potential to save billions of dollars, improve service to the public, and strengthen government performance and accountability. Solid, steady progress has been made in the vast majority of the high-risk areas. Eighteen of the 30 areas on the 2013 list at least partially met all of the criteria for removal from the high risk list. Of those, 11 met at least one of the criteria for removal and partially met all others. Sufficient progress was made to narrow the scope of two high-risk issues— Protecting Public Health through Enhanced Oversight of Medical Products and DOD Contract Management. Overall, progress has been possible through the concerted actions of Congress, leadership and staff in agencies, and the Office of Management and Budget. This year GAO is adding 2 areas, bringing the total to 32. Managing Risks and Improving Veterans Affairs (VA) Health Care. GAO has reported since 2000 about VA facilities' failure to provide timely health care. 
In some cases, these delays or VA's failure to provide care at all have reportedly harmed veterans. Although VA has taken actions to address some GAO recommendations, more than 100 of GAO's recommendations have not been fully addressed, including recommendations related to the following areas: (1) ambiguous policies and inconsistent processes, (2) inadequate oversight and accountability, (3) information technology challenges, (4) inadequate training for VA staff, and (5) unclear resource needs and allocation priorities. The recently enacted Veterans Access, Choice, and Accountability Act included provisions to help VA address systemic weaknesses. VA must effectively implement the act.

Improving the Management of Information Technology (IT) Acquisitions and Operations. Congress has passed legislation and the administration has undertaken numerous initiatives to better manage IT investments. Nonetheless, federal IT investments too frequently fail to be completed or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. GAO has found that the federal government spent billions of dollars on failed and poorly performing IT investments, which often suffered from ineffective management, such as project planning, requirements definition, and program oversight and governance. Over the past 5 years, GAO made more than 730 recommendations; however, only about 23 percent had been fully implemented as of January 2015.

GAO is also expanding two areas due to evolving high-risk issues. Enforcement of Tax Laws. This area is expanded to include IRS's efforts to address tax refund fraud due to identity theft. IRS estimates it paid out $5.8 billion (the exact number is uncertain) in fraudulent refunds in tax year 2013 due to identity theft. This occurs when a thief files a fraudulent return using a legitimate taxpayer's identifying information and claims a refund. 
Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information (PII). This risk area is expanded because of the challenges to ensuring the privacy of personally identifiable information posed by advances in technology. These advances have allowed both government and private sector entities to collect and process extensive amounts of PII more effectively. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. This report contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving greater progress.
Many firms of varying sizes make up the U.S. petroleum industry. While some firms engage in only limited activities within the industry, such as exploration for and production of crude oil and natural gas or refining crude oil and marketing petroleum products, fully vertically integrated oil companies participate in all aspects of the industry. Before the 1970s, major oil companies that were fully vertically integrated controlled the global network for supplying, pricing, and marketing crude oil. However, the structure of the world crude oil market has dramatically changed as a result of such factors as the nationalization of oil fields by oil-producing countries, the emergence of independent oil companies, and the evolution of futures and spot markets in the 1970s and 1980s. Since U.S. oil prices were deregulated in 1981, the price paid for crude oil in the United States has been largely determined in the world oil market, which is mostly influenced by global factors, especially supply decisions of the Organization of Petroleum Exporting Countries (OPEC) and world economic and political conditions. The United States currently imports over 60 percent of its crude oil supply. In contrast, the bulk of the gasoline used in the United States is produced domestically. In 2001, for example, gasoline refined in the United States accounted for over 90 percent of the total domestic gasoline consumption. Companies that supply gasoline to U.S. markets also post the domestic gasoline prices. Historically, the domestic petroleum market has been divided into five regions: the East Coast region, the Midwest region, the Gulf Coast region, the Rocky Mountain region, and the West Coast region. Proposed mergers in all industries, including the petroleum industry, are generally reviewed by federal antitrust authorities—including FTC and the Department of Justice (DOJ)—to assess the potential impact on market competition. 
According to FTC officials, FTC generally reviews proposed mergers involving the petroleum industry because of the agency’s expertise in that industry. FTC analyzes these mergers to determine if they would likely diminish competition in the relevant markets and result in harm, such as increased prices. To determine the potential effect of a merger on market competition, FTC evaluates how the merger would change the level of market concentration, among other things. Conceptually, the higher the concentration, the less competitive the market is and the more likely that firms can exert control over prices. The ability to maintain prices above competitive levels for a significant period of time is known as market power. According to the merger guidelines jointly issued by DOJ and FTC, market concentration as measured by HHI is ranked into three separate categories: a market with an HHI under 1,000 is considered to be unconcentrated; if HHI is between 1,000 and 1,800 the market is considered moderately concentrated; and if HHI is above 1,800, the market is considered highly concentrated. While concentration is an important aspect of market structure—the underlying economic and technical characteristics of an industry—other aspects of market structure that may be affected by mergers also play an important role in determining the level of competition in a market. These aspects include barriers to entry, which are market conditions that provide established sellers an advantage over potential new entrants in an industry, and vertical integration. Over 2,600 merger transactions occurred from 1991 through 2000 involving all three segments of the U.S. petroleum industry. Almost 85 percent of the mergers occurred in the upstream segment (exploration and production), while the downstream segment (refining and marketing of petroleum) accounted for about 13 percent, and the midstream segment (transportation) accounted for over 2 percent. 
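The HHI and the DOJ/FTC concentration categories described above can be computed directly. The index is the sum of the squared market shares (in percent) of every firm in the market; the shares below are illustrative only, not actual petroleum industry data:

```python
# Minimal sketch of the Herfindahl-Hirschman Index (HHI) and the
# DOJ/FTC merger guideline benchmarks cited in the text. Market shares
# are percentages; the example market is hypothetical.

def hhi(shares_pct):
    """HHI from a list of firm market shares expressed in percent."""
    return sum(s ** 2 for s in shares_pct)

def concentration_category(index):
    """Classify a market per the DOJ/FTC benchmarks in the text."""
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Ten equal firms with 10 percent each: HHI = 10 x 10^2 = 1,000.
market = [10] * 10
print(hhi(market), concentration_category(hhi(market)))
```

The squaring weights large firms disproportionately: a market of one 50-percent firm and five 10-percent firms scores 3,000, well into the highly concentrated range, even though it has six sellers.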
The vast majority of the mergers—about 80 percent—involved one company’s purchase of a segment or asset of another company, while about 20 percent involved the acquisition of a company’s total assets by another so that the two became one company. Most of the mergers occurred in the second half of the decade, including those involving large partially or fully vertically integrated companies. Petroleum industry officials and experts we contacted cited several reasons for the industry’s wave of mergers in the 1990s, including achieving synergies, increasing growth and diversifying assets, and reducing costs. Economic literature indicates that enhancing market power is also sometimes a motive for mergers. Ultimately, these reasons mostly relate to companies’ desire to maximize profit or stock values.

Mergers in the 1990s contributed to increases in market concentration in the downstream segment of the U.S. petroleum industry, while the upstream segment experienced little change overall. We found that market concentration, as measured by the HHI, decreased slightly in the upstream segment, based on crude oil production activities at the national level, from 290 in 1990 to 217 in 2000. Moreover, based on benchmarks established jointly by DOJ and FTC, the upstream segment of the U.S. petroleum industry remained unconcentrated at the end of the 1990s. The increases in market concentration in the downstream segment varied by activity and region. For example, the HHI of the refining market in the East Coast region increased from a moderately concentrated level of 1,136 in 1990 to a highly concentrated level of 1,819 in 2000. In the Rocky Mountain and the West Coast regions, it increased from 1,029 to 1,124 and from 937 to 1,267, respectively, in that same period. 
Thus, while each of these refining markets increased in concentration, the Rocky Mountain remained within the moderately concentrated range but the West Coast changed from unconcentrated in 1990 to moderately concentrated in 2000. The HHI of refining markets also increased from 699 to 980 in the Midwest and from 534 to 704 in the Gulf Coast during the same period, although these markets remained unconcentrated. In wholesale gasoline markets, market concentration increased broadly throughout the United States between 1994 and 2002. Specifically, we found that 46 states and the District of Columbia had moderately or highly concentrated markets by 2002, compared to 27 in 1994. In both the refining and wholesale markets of the downstream segment, merger activity and market concentration were highly correlated for most regions of the country. Evidence from various sources indicates that, in addition to increasing market concentration, mergers also contributed to changes in other aspects of market structure in the U.S. petroleum industry that affect competition—specifically, vertical integration and barriers to entry. However, we could not quantify the extent of these changes because of a lack of relevant data. Vertical integration can conceptually have both pro- and anticompetitive effects. Based on anecdotal evidence and economic analyses by some industry experts, we determined that a number of mergers that have occurred since the 1990s have led to greater vertical integration in the U.S. petroleum industry, especially in the refining and marketing segment. For example, we identified eight mergers that occurred between 1995 and 2001 that might have enhanced the degree of vertical integration, particularly in the downstream segment. Concerning barriers to entry, our interviews with petroleum industry officials and experts provide evidence that mergers had some impact on the U.S. petroleum industry. 
Barriers to entry could have implications for market competition because companies that operate in concentrated industries with high barriers to entry are more likely to possess market power. Industry officials pointed out that large capital requirements and environmental regulations constitute barriers for potential new entrants into the U.S. refining business. For example, the officials indicated that a typical refinery could cost billions of dollars to build and that it may be difficult to obtain the necessary permits from the relevant state or local authorities. At the wholesale and retail marketing levels, industry officials pointed out that mergers might have exacerbated barriers to entry in some markets. For example, the officials noted that mergers have contributed to a situation where pipelines and terminals are owned by fewer, mostly integrated companies that sometimes deny access to third-party users, especially when supply is tight—which creates a disincentive for potential new entrants into such wholesale markets. According to some petroleum industry officials that we interviewed, gasoline marketing in the United States has changed in two major ways since the 1990s. First, the availability of unbranded gasoline has decreased, partly due to mergers. Officials noted that unbranded gasoline is generally priced lower than branded. They generally attributed the decreased availability of unbranded gasoline to one or more of the following factors:

- There are now fewer independent refiners, who typically supply mostly unbranded gasoline. These refiners have been acquired by branded companies, have grown large enough to be considered a brand, or have simply closed down.
- Partially or fully vertically integrated oil companies have sold or mothballed some refineries. As a result, some of these companies now have only enough refinery capacity to supply their own branded needs, with little or no excess to sell as unbranded.
- Major branded refiners are managing their inventory more efficiently, ensuring that they produce only enough gasoline to meet their current branded needs.

We could not quantify the extent of the decrease in the unbranded gasoline supply because the data required for such analyses do not exist. The second change identified by these officials is that refiners now prefer dealing with large distributors and retailers because they present a lower credit risk and because it is more efficient to sell a larger volume through fewer entities. Refiners manifest this preference by setting minimum volume requirements for gasoline purchases. These requirements have motivated further consolidation in the distributor and retail sectors, including the rise of hypermarkets. Our econometric modeling shows that the mergers we examined mostly led to higher wholesale gasoline prices in the second half of the 1990s. The majority of the eight specific mergers we examined—Ultramar Diamond Shamrock (UDS)-Total, Tosco-Unocal, Marathon-Ashland, Shell-Texaco I (Equilon), Shell-Texaco II (Motiva), BP-Amoco, Exxon-Mobil, and Marathon Ashland Petroleum (MAP)-UDS—resulted in higher prices of wholesale gasoline in the cities where the merging companies supplied gasoline before they merged. The effects of some of the mergers were inconclusive, especially for boutique fuels sold in the East Coast and Gulf Coast regions and in California. For the seven mergers that we modeled for conventional gasoline, five led to increased prices, especially the MAP-UDS and Exxon-Mobil mergers, where the increases generally exceeded 2 cents per gallon, on average. For the four mergers that we modeled for reformulated gasoline, two—Exxon-Mobil and Marathon-Ashland—led to increased prices of about 1 cent per gallon, on average. In contrast, the Shell-Texaco II (Motiva) merger led to price decreases of less than one-half cent per gallon, on average, for branded gasoline only. 
For the two mergers—Tosco-Unocal and Shell-Texaco I (Equilon)—that we modeled for gasoline used in California, known as California Air Resources Board (CARB) gasoline, only the Tosco-Unocal merger led to price increases. The increases were for branded gasoline only and exceeded 6 cents per gallon, on average. For market concentration, which captures the cumulative effects of mergers as well as other competitive factors, our econometric analysis shows that increased market concentration resulted in higher wholesale gasoline prices. Prices for conventional (non-boutique) gasoline, the dominant type of gasoline sold nationwide from 1994 through 2000, increased by less than one-half cent per gallon, on average, for branded and unbranded gasoline. The increases were larger in the West than in the East—between one-half cent and one cent per gallon in the West, and about one-quarter cent in the East (for branded gasoline only), on average. Price increases for boutique fuels sold in some parts of the East Coast and Gulf Coast regions and in California were larger than the increases for conventional gasoline. The wholesale prices increased by an average of about 1 cent per gallon for boutique fuel sold in the East Coast and Gulf Coast regions between 1995 and 2000, and by an average of over 7 cents per gallon in California between 1996 and 2000. Our analysis shows that wholesale gasoline prices were also affected by other factors included in the econometric models—particularly, gasoline inventories relative to demand, refinery capacity utilization rates, and the supply disruptions that occurred in some parts of the Midwest and the West Coast. In particular, wholesale gasoline prices were about 1 cent per gallon higher, on average, when gasoline inventories were low relative to demand, typically in the summer driving months. 
Also, prices were higher by an average of about one-tenth to two-tenths of a cent per gallon for each 1-percent increase in refinery capacity utilization rates. The prices of conventional gasoline were about 4 to 5 cents per gallon higher, on average, during the Midwest and West Coast supply disruptions. The increase in prices for CARB gasoline was about 4 to 7 cents per gallon, on average, during the West Coast supply disruptions. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841. Key contributors to this testimony included Godwin Agbara, Scott Farrow, John A. Karikari, and Cynthia Norris. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Gasoline is subject to dramatic price swings. A multitude of factors cause volatility in U.S. gasoline markets, including world crude oil costs, limited refining capacity, and low inventories relative to demand. Since the 1990s, another factor affecting U.S. gasoline markets has been a wave of mergers in the petroleum industry, several of them between large oil companies that had previously competed with each other. For example, in 1999, Exxon, the largest U.S. oil company, merged with Mobil, the second largest. This testimony is based primarily on Energy Markets: Effects of Mergers and Market Concentration in the U.S. Petroleum Industry (GAO-04-96, May 17, 2004). That report examined mergers in the U.S. petroleum industry from the 1990s through 2000, the changes in market concentration (the distribution of market shares among competing firms) and other factors affecting competition in the U.S. petroleum industry, how U.S. gasoline marketing has changed since the 1990s, and how mergers and market concentration in the U.S. petroleum industry have affected U.S. gasoline prices at the wholesale level. To address these issues, GAO purchased and analyzed a large body of data and developed state-of-the-art econometric models for isolating the effects of eight specific mergers and increased market concentration on wholesale gasoline prices. Experts peer-reviewed GAO's analysis. One of the many factors that can affect gasoline prices is mergers within the U.S. petroleum industry. Over 2,600 such mergers have occurred since the 1990s. The majority occurred later in the period, most frequently among firms involved in exploration and production. Industry officials cited various reasons for the mergers, particularly the need for increased efficiency and cost savings. Economic literature also suggests that firms sometimes merge to enhance their ability to control prices. 
Partly because of the mergers, market concentration has increased in the industry, mostly in the downstream (refining and marketing) segment. For example, market concentration in refining increased from moderately to highly concentrated on the East Coast and from unconcentrated to moderately concentrated on the West Coast. Concentration in the wholesale gasoline market increased substantially from the mid-1990s so that by 2002, most states had either moderately or highly concentrated wholesale gasoline markets. On the other hand, the upstream (exploration and production) segment remained unconcentrated at the end of the 1990s. Anecdotal evidence suggests that mergers also have changed other factors affecting competition, such as firms' ability to enter the market. Two major changes have occurred in U.S. gasoline marketing related to mergers, according to industry officials. First, the availability of generic gasoline, which is generally priced lower than branded gasoline, has decreased substantially. Second, refiners now prefer to deal with large distributors and retailers, which has motivated further consolidation in distributor and retail markets. Based on data from the mid-1990s through 2000, GAO's econometric analyses indicate that mergers and increased market concentration generally led to higher wholesale gasoline prices in the United States. Six of the eight mergers GAO modeled led to price increases, averaging about 2 cents per gallon. Increased market concentration, which reflects the cumulative effects of mergers and other competitive factors, also led to increased prices in most cases. For conventional gasoline, the predominant type used in the country, the change in wholesale price due to increased market concentration ranged from a decrease of about 1 cent per gallon to an increase of about 5 cents per gallon. 
For boutique fuels sold in the East Coast and Gulf Coast regions, wholesale prices increased by about 1 cent per gallon, while prices for boutique fuels sold in California increased by over 7 cents per gallon. GAO also identified price increases of one-tenth of a cent to 7 cents that were caused by other factors included in the models--particularly low gasoline inventories relative to demand, high refinery capacity utilization rates, and supply disruptions in some regions. FTC disagreed with GAO's methodology and findings. However, GAO believes its analyses are sound.
The Comanche program was established in 1983 to replace the Army’s light helicopter fleet. The contractor team of Sikorsky Aircraft Corporation and Boeing Helicopter Company was expected to design a low-cost, lightweight, advanced technology helicopter capable of performing the primary missions of armed reconnaissance and attack. Critical to achieving these capabilities is the successful development of advanced technologies, including composite materials, advanced avionics and propulsion systems, and sophisticated software and hardware. The Army must meet ambitious maintainability goals in order to (1) realize the significantly lower operating and support costs predicted for this program and (2) achieve a wartime operational availability for the Comanche of 6 hours per day. In December 1994, the Secretary of Defense directed the Army to restructure the Comanche helicopter program as part of efforts to meet budgetary constraints. The Secretary’s restructure decision reduced funding for the program from $4.2 billion to $2.2 billion for fiscal years 1996 through 2001. In addition to extending the development phase by 3 years, it also called for two flyable prototypes to be produced and the Comanche production decision to be deferred. In response to the Secretary’s decision, the Army proposed a program restructure that would allow it to acquire, within the Secretary’s funding constraint, six aircraft in addition to the two prototypes by deferring developmental efforts to fiscal year 2002 and beyond. DOD approved the proposal in March 1995. The Army’s restructuring of the Comanche program perpetuates (1) the risk associated with making production decisions before knowing whether the aircraft will be able to perform as required and (2) the risk of higher program costs. 
According to DOD’s April 1990 guidelines for determining degrees of concurrency, a program with high concurrency typically proceeds into low-rate initial production before significant initial operational test and evaluation is completed. Regarding the need to keep concurrency low, the guidelines note that establishing programs with no concurrency, or a low degree of concurrency, avoids the risks that (1) production items have to be retrofitted to make them work properly and (2) system design will not be thoroughly tested. As we recently reported, aircraft systems, including the T-45A and C-17, that entered low-rate initial production before successfully completing initial operational testing and evaluation experienced significant and sometimes costly modifications to achieve satisfactory performance. Under the Army’s restructured program, operational testing will not begin until after the low-rate initial production decision is made, continuing the risks associated with the highly concurrent Comanche program. In responding to the Secretary’s restructure decision, the Army proposed, and was subsequently granted approval, to buy six “early operational capability” aircraft, in addition to the two prototypes that were to be acquired under the Secretary’s decision. According to program officials, these aircraft are estimated to cost in excess of $300 million. The Army does not consider these aircraft as either prototype or low-rate initial production aircraft; however, program officials believe that when these aircraft are fielded, the Army will be able to better evaluate the Comanche’s mission capability. The Army intends to fund these aircraft by deferring additional developmental efforts to fiscal years 2002 and beyond. Under the Army’s restructured program, operational testing will not begin until well after funds are committed to buy production aircraft. 
Armed reconnaissance and attack mission equipment packages are to be integrated into the six early operational aircraft by fiscal year 2004. The Army plans to use these aircraft to start operational testing by about August 2005. However, long-lead production decisions are scheduled for November 2003, and low-rate initial production is planned to start in November 2004, about 9 months before operational testing begins. According to DOD’s guidelines, the amount of risk associated with concurrency can be limited by reducing production aircraft to the minimum necessary to perform initial operational testing. The Army maintains that under the stretched out program it can conduct initial operational testing with the six early operational aircraft. Because the restructure has provided the additional time and aircraft, the Army has an opportunity to significantly reduce or eliminate program concurrency and its associated risks by completing operational testing before committing funds to any production decisions. The Comanche was originally justified to the Congress as a relatively inexpensive aircraft. However, since 1985, the program has experienced significant increases in program acquisition unit cost. Funding reductions have caused the program to undergo significant restructuring, resulting in sharp decreases in planned acquisition quantities and lengthening of development schedules, thereby increasing Comanche program costs. In 1985, the Comanche had estimated total program acquisition costs of about $61 billion for 5,023 aircraft (or $12.1 million per aircraft). In 1992, we reported that (1) as of October 1991, the program acquisition unit cost had increased to $27.4 million, (2) acquisition quantities had been reduced to 1,292 aircraft, and (3) future increases in cost per aircraft were likely. As of February 1995, the Comanche’s estimated program acquisition unit cost was $34.4 million per aircraft, a 185-percent increase from the 1985 estimate. 
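The cost-growth figures above reduce to simple arithmetic; a sketch checking them (the percentage comes out a little below the reported 185 percent because the published inputs are rounded):

```python
# Figures from the report: 1985 baseline versus the February 1995 estimate.
cost_1985_total = 61e9   # ~$61 billion total program acquisition cost in 1985
qty_1985 = 5023          # planned aircraft in 1985
unit_1985 = cost_1985_total / qty_1985   # ~$12.1 million per aircraft

unit_1995 = 34.4e6       # February 1995 program acquisition unit cost
qty_1995 = 1292          # reduced planned quantity
pct_increase = (unit_1995 - unit_1985) / unit_1985 * 100  # ~183-185 percent,
                                                          # depending on rounding
total_1995 = unit_1995 * qty_1995        # ~$44.4 billion total program cost

print(f"unit cost growth: {pct_increase:.0f} percent; "
      f"total: ${total_1995 / 1e9:.1f} billion")
```

The recomputed total of roughly $44.4 billion is consistent with the "more than $44 billion" figure the report gives for the 1,292-aircraft program.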
The estimated total program acquisition cost for the planned acquisition of 1,292 aircraft is now more than $44 billion. Both the Secretary’s decision and the Army’s restructure would extend the development program by about 3 years and, under either, increase the risk of higher total program cost and cost per aircraft. However, in reviewing the Army’s restructure proposal, DOD noted some concern over Comanche program costs for fiscal year 2002 and beyond and the large increase in investment programs projected to occur about that time. We are also concerned that the Army’s plan to defer additional developmental efforts to fiscal year 2002 and beyond may increase the risk that needed funds may not be available to perform the deferred developmental effort. The Comanche program’s uncertainties in software development and aircraft maintainability increase the risk that the aircraft will not perform successfully. We believe the restructuring provides additional time to resolve these issues before the decision to enter production is made. The Comanche will be the most computerized, software-intensive Army helicopter ever built. The Army estimates that about 1.4 million lines of code are required to perform and integrate mission critical functions. With additional ground support and training software to be developed, the total program will have more than 2.7 million lines of code. This compares to about 573,000 lines of code for the upgraded Apache attack helicopter with fire control radar. The Army estimates 95 percent of the Comanche’s total software will be written in Ada, a DOD-developed programming language. The Army plans to demonstrate initial software performance with the mission equipment package, which includes the flight control system, during first flight of the Comanche, scheduled for November 1995. The development and integration of on-board, embedded computer systems is a significant program objective. 
The Comanche’s performance and capability depend heavily on these systems and efforts have been ongoing to solve the problems associated with these systems. Nevertheless, (1) software development problems still exist with the Ada compilation system, (2) delays in software development and testing are occurring, and (3) improvements are needed in configuration management. If these issues are not resolved, the aircraft’s performance and capability will be degraded and first flight could be delayed. Almost all of the Comanche software will be developed in the Ada programming language; however, software developers are not using the same version of the Ada compilation system. The Ada compilation system translates Ada code into machine language so that software can be used by the Comanche’s computers. For example, it is being used to help develop software for use on the mission equipment package that is critical for first flight. Subcontractors and the contractor team should be using the same, qualified version of this compilation system to ensure effective software integration. However, fixes to individual compiler software problems are not being shared with all developers; therefore, they are not using a common compilation system. These problems have already delayed qualification testing of the compilation system by 1 year. The lack of a uniform, qualified compilation system among software developers could put first flight at risk, according to the Defense Plant Representative Office. Problems with software integration may show up once integration testing begins in the June to November 1995 time frame. If that occurs, there may not be time to fix problems prior to scheduled first flight. The program is experiencing high turnover of software engineers at one of the contractor team’s facilities. 
In its December 1994 monthly assessment report, the Defense Plant Representative Office, which is responsible for contract oversight, observed that high turnover of software personnel was putting scheduled first flight at risk. Loss of key personnel has already contributed to schedule slippage in several critical software development areas. Software development for the following areas has been affected: the airborne engine monitoring system, aircraft systems management, control database, and crewstation interface management. The contractor team has formulated a “get well” plan that is dependent on being able to hire additional personnel in these areas. However, hiring additional qualified personnel is difficult, according to the Defense Plant Representative Office, because employment would be short term. The flight control system software verification testing is also being delayed. As of February 8, 1995, Boeing had conducted only 163 of approximately 500 tests originally planned to be completed by that date. The subcontractor responsible for developing this software has been late delivering software for testing and has provided faulty software to Boeing, according to the Defense Plant Representative Office. Boeing established a recovery plan for this area that would have resulted in a completion date in March 1995—about a 1-month delay from the original plan. However, in February 1995, the contractor revised the recovery plan to reflect a completion date of July 1995—a 5-month delay. The flight control system is critical to first flight, according to the Defense Plant Representative Office. However, because of delays with verification testing, the Defense Plant Representative Office is concerned that the remaining verification testing, as well as the validation and formal qualification testing, will not be completed in a timely manner. As a result, first flight may be delayed. Boeing is scheduled to complete these tests prior to first flight. 
According to the program office, Boeing’s plan to complete the testing calls for it to be conducted concurrently. If major problems occur in any one of the testing phases, there may not be enough time to fix the problem and complete all testing before first flight. Configuration management is the discipline of applying technical and administrative direction and surveillance to (a) control the flow of information between organizations and activities within a project; (b) manage the ownership of, and changes to, controlled information; (c) ensure information consistency; and (d) enable product release, acceptance, and maintenance. The part of configuration management used to report software problems and changes among the contractor team and subcontractors has shortcomings that put software development at risk. In its November 1994 monthly assessment report, the Defense Plant Representative Office observed that the lack of a common problem reporting system made proper handling of software related changes difficult. Furthermore, the report noted that this situation could adversely impact scheduled first flight of the Comanche. As of February 1995, the contractor team still did not have a common, automated database available to track problem change reports. Thus, the contractor team, as well as subcontractors, did not have visibility over changes made to software. Maintainability requirements are important to achieving lower operating and support costs and wartime availability goals. However, these goals are at risk because key maintainability requirements such as direct maintenance man-hours per flight hour (MMH/FH), the mean time to repair (MTTR), and fault isolation may not be achievable. Individually, failure to meet these parameters may not be a significant problem; however, collectively they affect the ability of the Comanche to achieve lower operating and support cost and wartime availability objectives. 
In March 1987, the Army established a 2.6 direct MMH/FH requirement for the Comanche. It represents the corrective and preventive maintenance per flight hour expected to be performed at the unit level. The Army formulated its planned wartime operating tempo for a Comanche battalion based on 6 hours a day per aircraft, or 2,200 flying hours per year. It then determined the maintenance factor needed to support this operating tempo—2.6 MMH/FH. As the MMH/FH level increases, the number of maintainers needed to sustain the 2,200 wartime flying hour goal increases, as do operating and support costs. Conversely, if the Army could not increase the number of maintainers, the planned operating tempo would have to be reduced. The reasonableness of the Comanche’s 2.6 direct MMH/FH requirement has been debated for several years within the Army and DOD. Representatives from the program office; the Army Materiel Systems Analysis Activity, which independently evaluates program testing results; the Office of the Assistant Secretary of the Army for Research, Development, and Acquisition; and the Army Cost and Economic Analysis Center met on October 28, 1994, to discuss the direct MMH/FH goal for the Comanche program. They agreed that the 2.6-MMH/FH requirement was not a realistic, achievable goal. Consequently, Army officials reached consensus and agreed on 3.2 direct MMH/FH as the Army-wide position for this parameter. However, during these discussions, Army Materiel Systems Analysis Activity personnel noted that attaining a 3.2-MMH/FH goal represented a medium to high risk, while a 4.3-MMH/FH goal had a low to medium risk. Increasing the maintenance factor increases the number of maintainers needed and will increase estimated operating and support costs by about $800 million over a 20-year period. The direct MMH/FH requirement does not represent the total maintenance burden for the Comanche because it does not include indirect maintenance time. 
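The workload implication of raising the direct MMH/FH factor is straightforward arithmetic against the 2,200-hour wartime flying goal; a sketch using the report's figures (converting the resulting hours into a maintainer head count would additionally require an annual-hours-per-maintainer assumption, which the report does not state):

```python
FLIGHT_HOURS_PER_YEAR = 2200  # Army wartime operating tempo: 6 hours/day per aircraft

def direct_maintenance_hours(mmh_per_fh, flight_hours=FLIGHT_HOURS_PER_YEAR):
    """Direct maintenance man-hours required per aircraft per year."""
    return mmh_per_fh * flight_hours

original = direct_maintenance_hours(2.6)  # original 1987 requirement: 5,720 hours
revised = direct_maintenance_hours(3.2)   # revised Army-wide position: 7,040 hours
gap = revised - original                  # extra hours that must be absorbed by more
                                          # maintainers or a reduced operating tempo
print(original, revised, gap)
```

The roughly 1,300 additional annual maintenance hours per aircraft are what drive the estimated $800 million increase in 20-year operating and support costs noted above.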
The Army does not normally collect data on indirect maintenance time. According to the program office, its best estimate of indirect maintenance time, following Army guidance, is 2.5 MMH/FH, and this figure has been used for calculating manpower needs for crew chief personnel on the Comanche. Thus, the total maintenance burden assumed for the Comanche is currently 5.7 MMH/FH (3.2 direct MMH/FH plus 2.5 indirect MMH/FH). To minimize turnaround time for repairs at the unit and depot, the Army established MTTR requirements of 52 minutes for repairs at the unit level and up to 12 hours at the depot level for the Comanche. These requirements represent the average time expected to diagnose a fault, remove and repair an item, and perform an operational check and/or test flight. We determined that any increase in MTTR above 1 hour will begin to impact the Army’s wartime availability goal of 2,200 hours per year, unless additional maintenance personnel are available. As of January 1995, the contractor team was estimating that the Army would achieve 59 minutes for unit level repairs. According to contractor team officials, the requirement was not being met because the cure time required for composite material used on the aircraft was greater than expected. The contractor team discussed changing the MTTR requirement to 1 hour; however, the program office believed the problem could be resolved and did not believe the specification should be changed. The contractor team has not yet developed MTTR estimates for depot-level repair. The Comanche’s diagnostic system is required to correctly isolate failed mechanical and electrical components at least 80 percent of the time with a high degree of accuracy. A high level of accuracy is essential as it allows maintainers to isolate and fix problems at the unit level. 
If the fault isolation requirement is not met, the Comanche is unlikely to achieve its MTTR requirement, thereby adversely affecting the Army’s ability to execute its maintenance concept and its wartime availability goals. Contractor team officials stated the fault isolation requirement was very optimistic, and although they are striving to meet this requirement, it may eventually have to be changed. As of January 1995, the contractor team predicted the system could achieve an overall 69-percent fault isolation rate; however, this rate would not meet the specification for mechanical and electrical component fault isolation. There are design limitations on two components, according to the program office, and changes to bring these components into conformance with specifications would be costly and increase the weight of the aircraft. Therefore, as of January 1995, the contractor team and the program office have agreed not to take action on these components. The Army established a requirement of a 1-percent false removal rate for the Comanche. A false removal occurs when a part removed from the aircraft shows no evidence of failure when tested. This requirement is dependent, to a large extent, on the success of the fault detection/isolation system in detecting and isolating failed components. Program personnel characterize the 1-percent requirement as stringent and one that will be challenging to achieve. An Army Materiel Systems Analysis Activity official believes some design improvements have occurred in this area, but the risk associated with achieving this requirement still remains high. If the Comanche does not meet this requirement, estimated operating and support costs for the Comanche will be higher than previously predicted. The Army has not had good experience in developing fault detection/isolation and false removal systems for other aircraft. 
In September 1990, we reported that the fault detection and isolation system on the Apache aircraft did not always accurately detect the component that caused a particular fault, and the system detected faults that did not actually exist about 40 percent of the time. As a result, Apache maintainers had to perform additional work to locate failed components. Recently, through a reliability program, the false removal rate for the targeting and night vision systems on the Apache improved to about 10 to 15 percent, according to Army officials. This is still significantly higher than the 1-percent requirement established for the Comanche program. Although the program is experiencing technical problems, it is currently meeting its goals of reducing maintenance levels and keeping overall weight growth within acceptable limits for the Comanche. The Army’s maintenance concept for the Comanche program is predicated on two levels of maintenance—unit- and depot-level maintenance. This concept is important to achieving operating and support savings predicted for the program because it eliminates the intermediate level of maintenance. Unit-level maintenance entails removing and replacing components required to return the aircraft to a serviceable condition. Depot-level maintenance requires higher level maintenance skills and sophisticated capital equipment and facilities not found at the unit level. The Army traditionally uses a three-level maintenance concept that includes intermediate-level maintenance to handle component repairs. Intermediate-level maintenance is usually located close to the battalion. It is performed on components that cannot be easily repaired at the unit level and do not require the more sophisticated repairs done at the depot level. As of January 1995, no Comanche component had been designated for repair at the intermediate level, according to the program office. 
Contractor team personnel are conducting repair level analysis on Comanche components to determine whether components should be repaired at unit, intermediate, or depot facilities, according to program and contractor team officials. Any candidates identified for intermediate-level repair are reviewed for possible design changes that could allow maintenance at the unit or depot level. If economically feasible, the contractor team will make design changes to the component to preclude the need for intermediate-level repair. As of February 7, 1995, the Comanche’s empty weight increased from its original specification of 7,500 pounds to 7,883 pounds. Although the Comanche’s weight continues to increase, it remains within the allowable design limit of 7,997 pounds. Weight increases affect vertical rate of climb performance on the Comanche. The Army established a limit of 500 feet-per-minute as the minimum acceptable vertical rate of climb performance. If the Comanche’s weight exceeds 8,231 pounds, the engine will have to be redesigned to produce enough power at 95 percent maximum rated engine power to sustain the minimum 500 feet-per-minute vertical rate of climb requirement. We recommend that the Secretary of Defense require the Army to complete operational testing to validate the Comanche’s operational effectiveness and suitability before committing any funds to acquire long-lead production items or enter low-rate initial production. DOD generally concurred with the findings and original recommendations in our draft report. In commenting on the draft report, DOD offered explanations about why the problems that we identified were occurring and what they were doing to fix those problems. DOD disagreed with the report’s conclusion about false removals and stated that we had not presented any evidence that the Comanche’s 1-percent false removal rate may not be achievable. 
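The weight figures in this section amount to a tiered check against three thresholds: the original empty weight specification (7,500 pounds), the allowable design limit (7,997 pounds), and the 8,231-pound point beyond which an engine redesign would be needed to sustain the 500-feet-per-minute vertical rate of climb. A sketch of that logic (the function and status labels are ours, not the program's):

```python
SPEC_WEIGHT = 7_500      # original empty weight specification, pounds
DESIGN_LIMIT = 7_997     # allowable design limit, pounds
REDESIGN_POINT = 8_231   # above this, engine redesign needed for 500 ft/min climb

def weight_status(empty_weight_lb):
    """Classify an empty weight against the program's stated thresholds."""
    if empty_weight_lb <= SPEC_WEIGHT:
        return "within original specification"
    if empty_weight_lb <= DESIGN_LIMIT:
        return "over specification but within design limit"
    if empty_weight_lb <= REDESIGN_POINT:
        return "over design limit; climb margin at risk"
    return "engine redesign required to meet 500 ft/min climb"

print(weight_status(7_883))  # the February 1995 figure
```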
We still believe that the false removal goal is high risk and adjusted the report to more clearly reflect our concern. Regarding our draft report recommendation that DOD develop program fixes that achieve program goals and reduce the risks we identified, DOD concurred and noted that the approved restructuring will significantly reduce risk. DOD concurred with our other draft recommendation not to commit production funds to the program until performance and mission requirements are met and noted that the program would be reviewed by DOD before approving the Army’s request to proceed to the engineering and manufacturing development phase—the Milestone II decision scheduled for October 2001. Because DOD concurred in our draft report recommendations and is taking action on them, we are no longer including them in this report. However, our analysis of information on the restructuring obtained after we had submitted our draft report to DOD has further heightened our concerns about the risk of concurrency; therefore, we have revised the report and added a new recommendation. Under the stretched out, restructured Comanche program, operational testing is not even scheduled to begin until after the low-rate initial production decision is made. This approach continues the risks associated with making production decisions before knowing whether the aircraft will be able to perform as required. Prior to the restructure, the Army planned to start operational testing with eight aircraft in May 2003. Under the restructured program, the Army plans to start operational testing with six helicopters by about August 2005. We believe that the stretched out time frame and the six aircraft acquired under the restructure provide sufficient time and aircraft to operationally test the Comanche prior to making any production decisions. 
Additionally, because operational testing is not scheduled until about August 2005, DOD will not be in a position at Milestone II in October 2001 to adequately address whether the Comanche program is meeting its performance requirements. DOD’s comments are presented in their entirety in appendix I, along with our evaluation. To assess cost changes, software development, maintainability, and weight growth issues, we reviewed program documents and interviewed officials from the Department of the Army headquarters, Washington, D.C.; the Comanche Program Manager’s Office, St. Louis, Missouri; the U.S. Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland; the Ada Validation Facility, Wright-Patterson Air Force Base, Ohio; and the Office of the Assistant Secretary of Defense for Program Analysis and Evaluation, Washington, D.C. We also reviewed program documents and interviewed contractor and Defense Plant Representative Office officials at the Boeing Helicopter Company, Philadelphia, Pennsylvania; the Sikorsky Aircraft Corporation, Stratford, Connecticut; and the Comanche Joint Program Office, Trumbull, Connecticut. We conducted our review between August 1994 and February 1995 in accordance with generally accepted government auditing standards. We are also sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight; the Director, Office of Management and Budget; and the Secretaries of Defense and the Army. We will also provide copies to others upon request. This report was prepared under the direction of Thomas J. Schulz, Associate Director, Systems Development and Production Issues. Please contact Mr. Schulz at (202) 512-4841 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix II. 
The following are GAO’s comments on the Department of Defense’s (DOD) letter dated April 20, 1995. 1. As DOD’s comments note, there are many measures of unit cost, such as average unit flyaway cost, program acquisition unit cost, and unit procurement cost. We believe that the program unit cost that we used in the report—which the footnote in the report defines as total research, development, and acquisition costs in current dollars—is as valid as flyaway cost to portray program cost growth over time. We have adjusted the report to more clearly define the basis of the unit cost we use. 2. These comments are dealt with on pages 11 and 12 of the report and in our responses to the specific DOD comments that follow. Report material on costs and concurrency has been revised to reflect information obtained after our fieldwork had been concluded. 3. The report does not say that maintainability goals will never be met. We pointed out that some key maintainability requirements are not being met and, therefore, there is a risk that the Army may not achieve the lower operating and support costs and wartime availability goals that it has established for this program. We also said that individually, failure to meet these parameters may not be a significant problem; however, collectively they affect the ability of the Comanche to achieve the cost and availability goals. This point is clearly illustrated in DOD’s comments on the failure of the fault isolation system. According to DOD, “Fault isolation is one of the key diagnostic system requirements. The DOD agrees that if the fault isolation requirement is not met, the Comanche is unlikely to achieve its mean-time-to-repair requirement, . . .”. 4. We still believe that this goal is very aggressive. DOD acknowledges that this goal is stringent and the Army has not had good experience in the past with false removals on other aircraft. 
Additionally, as noted in the report, the Army Materiel Systems Analysis Activity said the risk associated with achieving this requirement remains high. We changed the section heading to emphasize the high risk. Major contributors to this report were Gary L. Billen, Robert D. Spence, Lauri A. Bischof, Michael W. Buell, and Karen A. Rieger.
GAO reviewed the Army's Comanche helicopter program, focusing on cost and technical issues associated with the restructured program. GAO found that: (1) the past risks associated with the Comanche's development and production will continue under the Army's restructured program; (2) production decisions will be made before operational testing of the Comanche begins and the development phase will be extended beyond fiscal year 2002; (3) the acquisition of six additional aircraft will allow the Army to conduct operational testing before committing funds to any further production decisions; (4) the Comanche's unit costs have tripled in the last 10 years due to program restructuring and a 74-percent decrease in procurement quantities; (5) the Comanche may not meet its wartime availability and operating cost requirements due to technical problems; and (6) the Comanche program is currently meeting its maintenance requirements and weight growth limits.
Available data and interviews with lenders and other mortgage industry participants indicate that appraisals are the most frequently used valuation method for home purchase and refinance mortgage originations. Appraisals provide an opinion of market value at a point in time and reflect prevailing economic and housing market conditions. Data provided to us by the five largest lenders (measured by dollar volume of mortgage originations in 2010) show that, for the first-lien residential mortgages for which data were available, these lenders obtained appraisals for about 90 percent of the mortgages they made in 2009 and 2010, including 98 percent of home purchase mortgages. The data we obtained from lenders include mortgages sold to the enterprises and mortgages insured by the Federal Housing Administration (FHA), which together accounted for the bulk of the mortgages originated in 2009 and 2010. The enterprises and FHA require appraisals to be performed for a large majority of the mortgages they purchase or insure. For mortgages for which an appraisal was not done, the lenders we spoke with reported that they generally relied on validation of the sales price (or loan amount in the case of a refinance) against a value generated by an automated valuation model (AVM), in accordance with enterprise policies that permit this practice for some mortgages with characteristics associated with a lower default risk. The enterprises, FHA, and lenders require and obtain appraisals for most mortgages because appraising is considered by mortgage industry participants to be the most credible and reliable valuation method for a number of reasons. Most notably, appraisals and appraisers are subject to specific requirements and standards. In particular, the Uniform Standards of Professional Appraisal Practice (USPAP) outlines the steps appraisers must take in developing appraisals and the information appraisal reports must contain. 
USPAP also requires that appraisers follow standards for ethical conduct and have the competence needed for a particular assignment. Furthermore, state licensing and certification requirements for appraisers include minimum education and experience criteria, and standardized report forms provide a way to report relevant appraisal information in a consistent format. In contrast, other valuation methods, such as broker price opinions (BPO) and AVMs, are not permitted for most purchase and refinance mortgage originations. The enterprises do not permit lenders to use BPOs for mortgage originations and only permit lenders to use AVMs for a modest percentage of mortgages they purchase. Additionally, the federal banking regulators’ guidelines state that BPOs and AVMs cannot be used as the primary basis for determining property values for mortgages originated by regulated institutions. However, the enterprises and lenders use BPOs and AVMs in a number of circumstances other than purchase and refinance mortgage originations because these methods can provide quicker, less expensive means of valuing properties in active markets. When performing appraisals, appraisers can use one or more of three approaches to value—sales comparison, cost, and income. The sales comparison approach compares and contrasts the property under appraisal with recent offerings and sales of similar properties. The cost approach is based on an estimate of the value of the land plus what it would cost to replace or reproduce the improvements minus depreciation. The income approach is an estimate of what a prudent investor would pay based upon the net income the property produces. USPAP requires appraisers to consider which approaches to value are applicable and necessary to perform a credible appraisal and provide an opinion of the market value of a particular property. Appraisers must then reconcile values produced by the different approaches they use to reach a value conclusion. 
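The two approaches appraisers use most often reduce to simple arithmetic: the cost approach is land value plus replacement cost of the improvements minus depreciation, while the sales comparison approach averages comparable sale prices after adjustments. A simplified sketch under hypothetical figures (real appraisals involve considerably more judgment in selecting comparables and estimating depreciation):

```python
def cost_approach_value(land_value, replacement_cost, depreciation):
    """Cost approach: land value plus cost to replace improvements, less depreciation."""
    return land_value + replacement_cost - depreciation

def sales_comparison_value(comparable_prices, adjustments):
    """Sales comparison: mean of comparable sale prices after per-property adjustments."""
    adjusted = [price + adj for price, adj in zip(comparable_prices, adjustments)]
    return sum(adjusted) / len(adjusted)

# Hypothetical subject property: three comparables and a cost estimate
sales_value = sales_comparison_value([310_000, 298_000, 305_000], [-5_000, 4_000, 0])
cost_value = cost_approach_value(land_value=80_000, replacement_cost=260_000,
                                 depreciation=30_000)
print(sales_value, cost_value)  # the appraiser reconciles these into one value opinion
```

As the text notes, USPAP requires the appraiser to reconcile the values the different approaches produce rather than mechanically average them.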
The enterprises and FHA require that, at a minimum, appraisers use the sales comparison approach for all appraisals because it is considered most applicable for estimating market value in typical mortgage transactions. Consistent with these policies, our review of valuation data that we obtained from a mortgage technology company—representing about 20 percent of mortgage originations in 2010—indicates that appraisers used the sales comparison approach for nearly all (more than 99 percent) of the mortgages covered by these data. The cost approach, which was generally used in conjunction with the sales comparison approach, was used somewhat less often—in approximately two-thirds of the transactions in 2009 and 2010, according to these data. The income approach was rarely used. Some mortgage industry stakeholders have argued that wider use of the cost approach in particular could help mitigate what they view as a limitation of the sales comparison approach. They told us that reliance on the sales comparison approach alone can lead to market values rising to unsustainable levels and that using the cost approach as a check on the sales comparison approach could help lenders and appraisers identify when this is happening. For example, these stakeholders pointed to a growing gap between average market values and average replacement costs of properties as the housing bubble developed in the early to mid-2000s. However, other mortgage industry participants noted that a rigorous application of the cost approach may not generate values much different from those generated using the sales comparison approach. They indicated, for example, that components of the cost approach—such as land value or profit margins of real estate developers—can grow rapidly in housing markets where sales prices are increasing. The data we obtained did not allow us to analyze the differences between the values appraisers generated using the different approaches. 
Factors such as the location and complexity of the property affect consumer costs for appraisals. For example, a property may have unique characteristics that are more difficult to value, such as being much larger than nearby properties or being an oceanfront property, which may require the appraiser to take more time to gather and analyze data to produce a credible appraisal. Mortgage industry participants we spoke with told us that the amount a consumer pays for an appraisal is generally not affected by whether the lender engages an appraiser directly or uses an appraisal management company (AMC)—which manages the appraisal process on lenders’ behalf—to select an appraiser. They said that AMCs typically charge lenders about the same amount that independent fee appraisers would charge lenders directly, and lenders generally pass on these charges to consumers. In general, lenders, AMC officials, appraisers, and other industry participants noted that consumer costs for appraisals have remained relatively stable in the past several years. However, appraisers have reported receiving lower fees when working with AMCs compared with working directly with lenders because AMCs keep a portion of the total fee. A provision in the Dodd-Frank Act that requires lenders to pay appraisers a customary and reasonable fee could affect consumer costs and appraisal quality, depending on interpretation and implementation of federal rules. The effect of this change on consumer costs may depend on the approach lenders and AMCs take in order to demonstrate compliance. For example, some lenders and industry groups are having fee studies done to determine what constitutes customary and reasonable fees. According to the Dodd-Frank Act, these studies cannot include the fees AMCs pay to appraisers. As a result, some industry participants, including some AMC officials, expect these studies to demonstrate that appraiser fees should be higher than what AMCs are currently paying. 
If that is the case, these lenders would require AMCs to increase the fees they pay to appraisers to a rate consistent with the findings of those studies, which in turn could increase appraisal costs for consumers. However, some lenders are evaluating the possibility of no longer using AMCs and engaging appraisers directly, which would eliminate the AMC administration fee from the appraisal fee that consumers pay. Other recent policy changes that took effect in 2010 aim to provide lenders with a greater incentive to estimate costs accurately when providing consumers with an estimated price for third-party settlement services, including appraisals. If actual costs exceed estimated costs by more than 10 percent, the lender is responsible for making up the difference. The Dodd-Frank Act permits, but does not require, lenders to separately disclose to consumers the fee paid to the appraiser by an AMC and the administration fee charged by the AMC. Another policy change enhances disclosures by requiring lenders to provide consumers with a copy of the valuation report prior to closing. Recently issued policies reinforce long-standing requirements and guidance designed to address conflicts of interest that may arise when direct or indirect personal interests bias appraisers from exercising their independent professional judgment. In order to prevent appraisers from being pressured, the federal banking regulators, the enterprises, FHA, and other agencies have regulations and policies governing the selection of, communications with, and coercion of appraisers. Examples of recently issued policies that address appraiser independence include HVCC, which took effect in May 2009; the enterprises’ new appraiser independence requirements that replaced HVCC in October 2010; and revised Interagency Appraisal and Evaluation Guidelines from the federal banking regulators, which were issued in December 2010. 
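The 10-percent tolerance rule described above is a straightforward calculation: the lender absorbs whatever portion of actual third-party settlement costs exceeds the estimate by more than the tolerance. A simplified per-item sketch (the actual federal tolerance rules aggregate certain charge categories, which this illustration omits):

```python
def lender_owes(estimated_cost, actual_cost, tolerance=0.10):
    """Amount the lender absorbs when actual settlement costs exceed
    the estimate by more than the tolerance (10 percent by default)."""
    ceiling = estimated_cost * (1 + tolerance)
    return round(max(0.0, actual_cost - ceiling), 2)  # round to cents

# Hypothetical example: a $500 appraisal estimate against a $600 actual charge;
# the lender makes up the amount above the $550 ceiling
print(lender_owes(500, 600))
```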
Provisions of these and other policies address (1) prohibitions against loan production staff involvement in appraiser selection and supervision; (2) prohibitions against third parties with an interest in the mortgage transaction, such as real estate agents or mortgage brokers, selecting appraisers; (3) limits on communications with appraisers; and (4) prohibitions against coercive behaviors. According to mortgage industry participants, HVCC and other factors have contributed to changes in appraiser selection processes—in particular, lenders’ more frequent use of AMCs to select appraisers. Some appraisal industry participants said that HVCC, which required additional layers of separation between loan production staff and appraisers for mortgages sold to the enterprises, led some lenders to outsource appraisal functions to AMCs because they thought using AMCs would allow them to easily demonstrate compliance with these requirements. In addition, lenders and other mortgage industry participants told us that market conditions, including an increase in the number of mortgages originated during the mid-2000s, and lenders’ geographic expansion over the years, put pressure on lenders’ capacity to manage appraisers and led to their reliance on AMCs. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality. Direct federal oversight of AMCs is limited. Federal banking regulators’ guidelines for lenders’ own appraisal functions list standards for appraiser selection, appraisal review, and reviewer qualifications. The guidelines also require lenders to establish processes to help ensure these standards are met when lenders outsource appraisal functions to third parties, such as AMCs. Officials from the federal banking regulators told us they review lenders’ policies and controls for overseeing AMCs, including the due diligence they perform when selecting AMCs. 
However, they told us they generally do not review an AMC’s operations directly unless they have serious concerns about the AMC and the lender is unable to address those concerns. In addition, a number of states began regulating AMCs in 2009, but the regulatory requirements vary and provide somewhat differing levels of oversight, according to officials from several state appraiser regulatory boards. Some appraiser groups and other appraisal industry participants have expressed concern that existing oversight may not provide adequate assurance that AMCs are complying with industry standards. These participants suggested that the practices of some AMCs for selecting appraisers, reviewing appraisal reports, and establishing qualifications for appraisal reviewers—key areas addressed in federal guidelines for lenders’ appraisal functions—may have led to a decline in appraisal quality. For example, appraiser groups said that some AMCs select appraisers based on who will accept the lowest fee and complete the appraisal report the fastest rather than on who is the most qualified, has the appropriate experience, and is familiar with the relevant neighborhood. AMC officials we spoke with said that they have processes that address these areas of concern—for example, using an automated system that identifies the most qualified appraiser based on the requirements for the assignment, the appraiser’s proximity to the subject property, and performance metrics such as the timeliness and quality of the appraiser’s work. While the impact of the increased use of AMCs on appraisal quality is unclear, Congress recognized the importance of additional AMC oversight in enacting the Dodd-Frank Act by placing the supervision of AMCs with state appraiser regulatory boards. 
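The automated selection systems AMC officials described rank candidate appraisers on assignment fit, proximity to the subject property, and performance metrics such as timeliness and quality. A toy version of such a ranking (the fields, weights, and 50-mile radius are entirely hypothetical; actual AMC systems are proprietary):

```python
# Candidate appraisers with illustrative proximity and performance data
appraisers = [
    {"name": "A", "miles_away": 5,  "on_time_rate": 0.95, "quality_score": 0.90},
    {"name": "B", "miles_away": 30, "on_time_rate": 0.99, "quality_score": 0.97},
    {"name": "C", "miles_away": 12, "on_time_rate": 0.80, "quality_score": 0.85},
]

def rank_key(appraiser):
    """Score a candidate: closer is better; timeliness and quality weighted equally."""
    proximity = max(0.0, 1 - appraiser["miles_away"] / 50)  # 0..1 over a 50-mile radius
    return (0.4 * proximity
            + 0.3 * appraiser["on_time_rate"]
            + 0.3 * appraiser["quality_score"])

best = max(appraisers, key=rank_key)
print(best["name"])
```

The appraiser groups' concern, in these terms, is that some AMCs effectively weight fee and turnaround time far more heavily than qualifications or neighborhood familiarity.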
The Dodd-Frank Act requires the federal banking regulators, FHFA, and the Bureau of Consumer Financial Protection to establish minimum standards for states to apply in registering AMCs, including requirements that appraisals coordinated by an AMC comply with USPAP and be conducted independently and free from inappropriate influence and coercion. This rulemaking provides a potential avenue for reinforcing existing federal requirements for key functions that may impact appraisal quality, such as selecting appraisers, reviewing appraisals, and establishing qualifications for appraisal reviewers. Such reinforcement could help to provide greater assurance to lenders, the enterprises, and federal agencies of the quality of the appraisals provided by AMCs. To help ensure more consistent and effective oversight of the appraisal industry, the report we are issuing today recommends that the heads of the federal banking regulators (FDIC, the Federal Reserve, NCUA, and OCC), FHFA, and the Bureau of Consumer Financial Protection—as part of their joint rulemaking required under the Dodd-Frank Act—consider including criteria for the selection of appraisers for appraisal orders, review of completed appraisals, and qualifications for appraisal reviewers when developing minimum standards for state registration of AMCs. In written comments on a draft of our report, the federal banking regulators and FHFA agreed with or indicated they will consider this recommendation. The Bureau of Consumer Financial Protection did not receive the draft report in time to provide comments. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I am happy to respond to any questions you may have at this time. For further information on this testimony, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this testimony include Steve Westley, Assistant Director; Don Brown; Marquita Campbell; Anar Ladhani; John McGrail; Erika Navarro; Jennifer Schwartz; and Andrew Stavisky. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on residential real estate valuations. Real estate valuations, which encompass appraisals and other value estimation methods, play a critical role in mortgage underwriting by providing evidence that the market value of a property is sufficient to help mitigate losses if the borrower is unable to repay the loan. However, recent turmoil in the mortgage market has raised questions about mortgage underwriting practices, including the quality and credibility of some valuations. An investigation into industry appraisal practices by the New York State Attorney General led to an agreement in 2008 between the Attorney General; Fannie Mae and Freddie Mac (the enterprises); and the Federal Housing Finance Agency (FHFA), which regulates the enterprises. This agreement included the Home Valuation Code of Conduct (HVCC), which set forth certain appraiser independence requirements for loans sold to the enterprises and took effect in 2009. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Pub. L. No. 111-203) (the Dodd-Frank Act) directed us to study the effectiveness and impact of various valuation methods and the options available for selecting appraisers, as well as the impact of HVCC. This testimony summarizes the report we are releasing today, which responds to the mandate in the Dodd-Frank Act. Our work focused on valuations of single-family residential properties for first-lien purchase and refinance mortgages. The report discusses (1) the use of different valuation methods and their advantages and disadvantages, (2) policies and other factors that affect consumer appraisal costs and requirements for lenders to disclose appraisal costs and valuation reports to consumers, and (3) conflict-of-interest and appraiser selection policies and views on the impact of these policies on industry stakeholders and appraisal quality. We consider the impact of HVCC throughout the report. 
Available data and interviews with lenders and other mortgage industry participants indicate that appraisals are the most frequently used valuation method for home purchase and refinance mortgage originations. Appraisals provide an opinion of market value at a point in time and reflect prevailing economic and housing market conditions. Data provided to us by the five largest lenders (measured by dollar volume of mortgage originations in 2010) show that, for the first-lien residential mortgages for which data were available, these lenders obtained appraisals for about 90 percent of the mortgages they made in 2009 and 2010, including 98 percent of home purchase mortgages. The data we obtained from lenders include mortgages sold to the enterprises and mortgages insured by the Federal Housing Administration (FHA), which together accounted for the bulk of the mortgages originated in 2009 and 2010. The enterprises and FHA require appraisals to be performed for a large majority of the mortgages they purchase or insure. For mortgages for which an appraisal was not done, the lenders we spoke with reported that they generally relied on validation of the sales price (or loan amount in the case of a refinance) against a value generated by an automated valuation model (AVM), in accordance with enterprise policies that permit this practice for some mortgages with characteristics associated with a lower default risk. Factors such as the location and complexity of the property affect consumer costs for appraisals. For example, a property may have unique characteristics that are more difficult to value, such as being much larger than nearby properties or being an oceanfront property, which may require the appraiser to take more time to gather and analyze data to produce a credible appraisal. 
Mortgage industry participants we spoke with told us that the amount a consumer pays for an appraisal is generally not affected by whether the lender engages an appraiser directly or uses an appraisal management company (AMC)--which manages the appraisal process on lenders' behalf--to select an appraiser. They said that AMCs typically charge lenders about the same amount that independent fee appraisers would charge lenders directly, and lenders generally pass on these charges to consumers. In general, lenders, AMC officials, appraisers, and other industry participants noted that consumer costs for appraisals have remained relatively stable in the past several years. However, appraisers have reported receiving lower fees when working with AMCs compared with working directly with lenders because AMCs keep a portion of the total fee. Recently issued policies reinforce long-standing requirements and guidance designed to address conflicts of interest that may arise when direct or indirect personal interests bias appraisers from exercising their independent professional judgment. In order to prevent appraisers from being pressured, the federal banking regulators, the enterprises, FHA, and other agencies have regulations and policies governing the selection of, communications with, and coercion of appraisers. Examples of recently issued policies that address appraiser independence include HVCC, which took effect in May 2009; the enterprises' new appraiser independence requirements that replaced HVCC in October 2010; and revised Interagency Appraisal and Evaluation Guidelines from the federal banking regulators, which were issued in December 2010. 
Provisions of these and other policies address (1) prohibitions against loan production staff involvement in appraiser selection and supervision; (2) prohibitions against third parties with an interest in the mortgage transaction, such as real estate agents or mortgage brokers, selecting appraisers; (3) limits on communications with appraisers; and (4) prohibitions against coercive behaviors.
U.S. critical infrastructure consists of systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on the nation’s security, national economic security, national public health or safety, or any combination of these matters. Critical infrastructure includes, among other things, banking and financing institutions, telecommunications networks, and energy production and transmission facilities, most of which are owned and operated by the private sector. Sector-specific agencies (SSA) are federal departments or agencies with responsibility for providing institutional knowledge and specialized expertise as well as leading, facilitating, or supporting the security and resilience programs and associated activities of their designated critical infrastructure sectors in the all-hazards environment. Threats to systems supporting critical infrastructure are evolving and growing. Cyber threats can be unintentional or intentional. Unintentional or non-adversarial threats include equipment failures, software coding errors, and the actions of poorly trained employees. They also include natural disasters and failures of critical infrastructure on which the organization depends but that are outside of its control. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. These threat adversaries vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include seeking monetary gain or seeking an economic, political, or military advantage. Table 1 describes the sources of cyber-based threats in more detail. 
Cyber threat adversaries make use of various techniques, tactics, and practices, or exploits, to adversely affect an organization's computers, software, or networks, or to intercept or steal valuable or sensitive information. These exploits are carried out through various conduits, including websites, e-mail, wireless and cellular communications, Internet protocols, portable media, and social media. Further, adversaries can leverage common computer software programs, such as Adobe Acrobat and Microsoft Office, to deliver a threat by embedding exploits within software files that are activated when a user opens a file in its corresponding program. Table 2 provides descriptions of common exploits or techniques, tactics, and practices used by cyber adversaries.

Reports of cyber exploits illustrate the debilitating effects such attacks can have on the nation's security, economy, and public health and safety. In May 2015, media sources reported that data belonging to 1.1 million health insurance customers in the Washington, D.C., area were stolen in a cyber attack on a private insurance company. Attackers accessed a database containing names, birth dates, e-mail addresses, and subscriber ID numbers of customers. In December 2014, the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued an updated alert on a sophisticated malware campaign compromising numerous industrial control system environments; its analysis indicated that this campaign had been ongoing since at least 2011. In the January 2014 to April 2014 release of its Monitor Report, ICS-CERT reported that a public utility had been compromised when a sophisticated threat actor gained unauthorized access to its control system network through a vulnerable remote access capability configured on the system.
The incident highlighted the need to evaluate security controls employed at the perimeter and to ensure that potential intrusion vectors are configured with appropriate security controls, monitoring, and detection capabilities.

Federal policy and public-private plans, including PPD-21 and the NIPP, establish roles and responsibilities for federal agencies working with the private sector and other entities to enhance the cyber and physical security of public and private critical infrastructures. PPD-21 shifted the nation's focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. The directive identified 16 critical infrastructure sectors and designated associated federal SSAs. Table 3 shows the 16 critical infrastructure sectors and the SSA for each sector. PPD-21 identified SSA roles and responsibilities that include collaborating with critical infrastructure owners and operators; with independent regulatory agencies, where appropriate; and with state, local, tribal, and territorial entities, as appropriate; serving as a day-to-day federal interface for the prioritization and coordination of sector-specific activities; carrying out incident management responsibilities consistent with statutory authority and other appropriate policies, directives, or regulations; and providing, supporting, or facilitating technical assistance and consultations for their respective sectors to identify vulnerabilities and help mitigate incidents, as appropriate.

The NIPP is to provide the overarching approach for integrating the nation's critical infrastructure protection and resilience activities into a single national effort.
DHS developed the NIPP in collaboration with public and private sector owners and operators and federal and nonfederal government representatives, including sector-specific agencies, from the critical infrastructure community. It details DHS's roles and responsibilities in protecting the nation's critical infrastructures and how sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. It emphasizes the importance of collaboration, partnering, and voluntary information sharing among DHS and industry owners and operators, and state, local, and tribal governments. The NIPP also stresses a partnership approach among the federal and state governments and industry stakeholders for developing, implementing, and maintaining a coordinated national effort to manage the risks to critical infrastructure and work toward enhancing physical and cyber resilience and security.

According to the NIPP, SSAs are to work with their private sector counterparts to understand cyber risk and develop sector-specific plans that address the security of the sector's cyber and other assets and functions. The SSAs and their private sector partners are to update their sector-specific plans based on DHS guidance to the sectors. The currently available sector-specific plans were released in 2010 to support the 2009 version of the NIPP. In response to the most recent NIPP, released in December 2013, DHS issued guidance in August 2014 directing the SSAs, in coordination with their sector stakeholders, to update their sector-specific plans. The SSAs are also to review and modify existing and future sector efforts to ensure that cyber concerns are fully integrated into sector security activities.

In addition, the NIPP sets up a framework for sharing information across and between federal and nonfederal stakeholders within each sector that includes the establishment of sector coordinating councils and government coordinating councils.
Sector coordinating councils are to serve as a voice for the sector and a principal entry point for the government to collaborate with the sector on critical infrastructure security and resilience activities. The government coordinating councils enable interagency, intergovernmental, and cross-jurisdictional coordination within and across sectors. Each government coordinating council is chaired by a representative from the designated SSA with responsibility for providing cross-sector coordination.

The NIPP also recommended several activities—referred to as Call to Action steps—to guide the efforts of the SSAs and their sector partners to advance security and resilience under three broad activity categories: building on partnership efforts, innovating in risk management, and focusing on outcomes. Table 4 shows the 10 Call to Action steps determined to have a cybersecurity-related nexus. The NIPP states that the identified steps, including these 10 actions with a greater relationship to enhancing cybersecurity, are not intended to be exhaustive or to be implemented in every sector. Rather, they are to provide strategic direction, allow for differing priorities in each sector, and enable continuous improvement of security and resilience efforts.

In addition, Executive Order 13636 was issued to, among other things, address the need to improve cybersecurity through information sharing and through collaboratively developing and implementing risk-based standards.
It called for the SSAs to, among other things, establish, in coordination with DHS, a voluntary program to support the adoption of the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework) by owners and operators of critical infrastructure and any other interested entities; create incentives to encourage owners and operators of critical infrastructure to participate in the voluntary program; and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and operating environments.

Sector-specific agencies determined the significance of cyber risk to the networks and industrial control systems of all 15 of the sectors in the scope of our review. Specifically, they determined that cyber risk was significant for 11 of the 15 sectors. For the remaining 4 sectors, the SSAs had determined that cyber risks were not significant due to, among other reasons, the lack of cyber dependence in the sectors' operations. These determinations were carried out in response to the 2009 NIPP, which directed the SSAs to consider how cyber would be prioritized among their sectors' critical infrastructure and key resources as part of the sector-specific planning process. The SSAs and their sector stakeholders were to include an overview of current and emerging sector risks, including those affecting cyber assets, when preparing their 2010 plans.

Table 5 shows the significance of cyber risk to each sector, as determined by the SSAs, as well as when these determinations were made. Since most of these determinations were made for the 2010 sector-specific planning process, they may not reflect the current risk environment of the sectors. In particular, SSAs for the 4 sectors that had not determined cyber risks to be significant during their 2010 sector-specific planning process subsequently reconsidered the significance of cyber risks to their sectors.
Also, in response to the 2013 NIPP, DHS issued guidance for developing updated sector-specific plans for 2015. According to this guidance and SSA officials, SSAs are to document how they have reconsidered the significance of cyber risks to their sectors. DHS officials stated that the SSAs have drafted their updated sector-specific plans and submitted them to DHS for review; however, the plans have not yet been finalized and released. Based on the 2010 sector-specific plans and subsequent documents and activities, the SSAs' determinations of the significance of cyber risk to their 15 respective sectors are summarized below.

DHS, in collaboration with chemical sector stakeholders, determined that cyber risk was a significant priority for the sector. In 2009, DHS and the chemical sector coordinating council issued the Roadmap to Secure Control Systems in the Chemical Sector, which documented the cybersecurity concerns for chemical facilities' industrial control systems and the need to develop cyber risk mitigation actions to be addressed over a 10-year period. In addition, the 2010 Chemical Sector-Specific Plan highlighted the importance of cyber systems to the sector and promoted the need for owners and operators of sector assets to apply risk assessment and management methodologies to identify cyber threats to their individual operations.

DHS did not consider cyber risks to be significant for the commercial facilities sector. The commercial facilities sector's 2010 sector-specific plan does not identify cyber risks as significant to the sector. DHS officials stated that the decision was based on the sector's diversity of components and the manner in which cyber-related technology is employed. According to these officials, a cyber event affecting one facility's cyber systems (e.g., access control or environmental systems) would not be likely to affect the cyber assets of other facilities within the sector.
However, in July 2015, DHS officials stated that, as part of the updated sector planning process, they had recognized cyber risk as a high-priority concern for the sector. In particular, they noted that the sector uses Internet-connected systems for processes such as ticketing and reservations, so a large-scale communications failure or cyber attack could disrupt the sector's operations.

DHS, in collaboration with communications sector stakeholders, completed a risk assessment in 2012 for the communications sector that identified cyber risk as a significant priority; however, the assessment noted that, due to the sector's diversity and level of resiliency, most of the threats would result only in local or regional communications disruptions or outages. The assessment evaluated cyber threats, such as malicious and non-malicious actors committing alterations or intrusions, that could pose local, regional, or national level risks to broadcasting, cable, satellite, wireless, and wireline communications networks. The risk assessment also concluded that malicious actors could use the communications sector to attack other sectors.

DHS did not consider cyber risk to be significant for the critical manufacturing sector. The sector's 2010 sector-specific plan stated that many critical manufacturing owners and operators from this diverse and dispersed sector had completed asset, system, or network-specific assessments on their own initiative. Also, the plan identified cyber elements that support the sector's functional areas, including electronic systems for processing the information necessary for management and operation or for automatic control of physical processes in manufacturing. This applied primarily to the production of metals, machinery, electrical equipment, and heavy equipment. However, the critical manufacturing sector relies upon other sectors, such as communications and information technology, where addressing cyber risk is a priority.
DHS officials stated that, since 2010, they have identified the sector's critical cyber functions and services, and the sector's draft 2015 sector-specific plan notes this as a step toward conducting a sector-wide cyber risk assessment.

DHS officials considered cyber risks for the dams sector and acknowledged that cyber threats could have negative consequences; however, they determined that cyber risks were not significant for the sector. Specifically, the sector's 2010 sector-specific plan concluded that the sector's cyber environment and its legacy industrial control systems were designed to operate in a fairly isolated environment using proprietary software, hardware, and communications technology and, as a result, were designed with cybersecurity as a low priority. However, the officials stated that vulnerabilities in industrial control systems pose cyber-related risks to the sector's operations. In the sector-specific plan, they acknowledged that the evolution of industrial control systems to incorporate network-based and Internet Protocol-addressable features and more commercially available technologies could introduce many of the same vulnerabilities that exist in current networked information systems. DHS officials also stated that they are addressing cybersecurity for the sector with their update to the sector-specific plan and the sector's roadmap for securing control systems, as well as with the development of a capability maturity model specifically for the dams sector. At the time of our review, the updated sector-specific plan was still in draft.

The Department of Defense (DOD) determined that cyber threats to contractors' unclassified information systems represented an unacceptable risk of compromise to DOD information and posed a significant risk to U.S. national security and economic security interests.
In the sector's 2010 sector-specific plan, DOD, in collaboration with its sector partners, listed cybersecurity and managing risk to information among its five goals for the sector's protection and resilience. In addition, DOD has issued annual "for official use only" reports on its progress defending DOD and the defense industrial base against cyber events for fiscal years 2010 through 2014. The reports identify definitions and categories of cyber events, exploited vulnerabilities, and adversary intrusion methods based on data from several key DOD organizations with cybersecurity responsibilities and other intelligence sources. The reports are to provide an annual update of cyber threats, threat sources, and vulnerability trends affecting the defense industrial base.

DHS officials, in collaboration with sector stakeholders, concluded that cyber threats could have a significant impact on the emergency services sector's operations. The risk assessment process brought together subject matter experts to perform an assessment of cyber risks across six emergency services sector disciplines: law enforcement, fire and emergency services, emergency medical services, emergency management, public works, and public safety communications and coordination/fusion. They developed cyber risk scenarios across multiple sector disciplines and applied DHS's Cybersecurity Assessment and Risk Management Approach methodology to reach their conclusion. The results were reported in 2012 in the Emergency Services Sector Cyber Risk Assessment.

In a previous GAO review of cybersecurity in the emergency services sector, we reported that sector planning activities, including the cyber risk assessment, did not address the more interconnected, Internet-based emerging technologies becoming more prevalent in the emergency services sector. As a result, the sector could be vulnerable to cyber risks in the future without more comprehensive planning.
We recommended that the Secretary of Homeland Security collaborate with emergency services sector stakeholders to address the cybersecurity implications of implementing technology initiatives in related plans. DHS agreed with our recommendation and stated that the updated sector-specific plan will include consideration of the sector's emerging technology. At the time of our review, the updated sector-specific plan was still in draft.

The Department of Energy (DOE) identified cyber risks as significant and a priority for the energy sector. Specifically, in the sector's 2010 sector-specific plan, DOE, in collaboration with its sector stakeholders, included cybersecurity among the sector's goals to enhance preparedness, security, and resilience. DOE officials stressed that their risk management approach focuses on resilience, especially in the context of ensuring the resilience of the electric grid. In addition, the 2011 Roadmap to Achieve Energy Delivery System Cybersecurity, developed by energy sector stakeholders, including responsible DOE officials, recognized the continually evolving cyber threats and vulnerabilities and provided a framework for energy sector stakeholders to survive a cyber incident while sustaining critical functions.

Treasury, in collaboration with sector stakeholders, identified cyber risk as significant to the financial services sector. Specifically, the 2010 financial services sector-specific plan stated that all of the sector's services rely on its cyber infrastructure, which necessitates that cybersecurity be factored into all of the sector's critical infrastructure protection activities. In addition, as a highly regulated sector, the financial services sector has been required to undergo risk assessments by financial regulators to satisfy regulatory requirements.
In July 2015, Treasury officials stated that they leveraged the collective body of risk assessment data to determine the sector's overall risk profile, which will be included in the 2015 sector-specific plan. At the time of our review, the updated sector-specific plan was still in draft.

The U.S. Department of Agriculture (USDA) and the Department of Health and Human Services' Food and Drug Administration (FDA), in collaboration with their sector stakeholders, determined that the significance of cyber risk was low for the food and agriculture sector when the sector-specific plan was developed in 2010. As stated in the plan, the sector did not perceive itself as a target of cyber attack and concluded that, based on the nature of its operations, a cyber attack would pose the risk of only minimal economic disruption. However, the plan acknowledged the rapidly evolving cyber environment and the need to revisit the issue in the future. In July 2015, USDA officials stated that they had reconsidered the significance of cyber risk and the role of cybersecurity in the sector and that this would be reflected in the yet-to-be-released 2015 sector-specific plan. In addition, according to USDA officials, they had completed a sector risk assessment effort with assistance from DHS.

The Department of Health and Human Services (HHS), in collaboration with its sector partners, identified cyber risk as significant to the health care and public health sector. Specifically, the 2010 sector-specific plan identified cybersecurity and mitigating risks to the sector's cyber assets as one of four service continuity goals for the sector. The plan's cybersecurity risk assessment section identified and categorized common cyber threats, vulnerabilities, consequences, and mitigation strategies for the sector. Also, HHS and its partners added cyber infrastructure protection as a research and development priority in the sector-specific plan.
In addition, health care entities, such as health plans and providers that maintain health data, must assess risks to cyber-based systems under the security requirements of the Health Insurance Portability and Accountability Act of 1996.

DHS, in collaboration with information technology sector stakeholders, identified cyber risk as a sector priority. DHS and its sector partners determined that the consequences of cyber incidents or events would be of great concern and would affect the sector's ability to produce or provide critical products and services. DHS worked with public and private information technology stakeholders to complete the Information Technology Sector Baseline Risk Assessment in 2009. The risk assessment focused on risks to the processes involved in the creation of IT products and services and to critical IT functions—including research and development, manufacturing, distribution, upgrades, and maintenance—and not on specific organizations or assets.

DHS and its nuclear sector stakeholders prioritized cyber risk as a significant risk for the nuclear sector. According to the 2011 Roadmap to Enhance Cyber Systems Security in the Nuclear Sector, they determined that the cyber systems supporting the nuclear sector are at risk due to the increasing volume, complexity, speed, and connectedness of the sector's systems. Therefore, DHS and its sector partners included protecting against the exploitation of the sector's cyber assets, systems, and networks among their sector goals and objectives for a comprehensive protective posture.

Addressing cyber risk is a significant priority for the transportation systems sector. In the 2010 transportation systems sector-specific plan, DHS's Transportation Security Administration (TSA) and U.S.
Coast Guard acknowledged the importance of cyber assets to the sector's operations across the various transportation modes and included an overview of the risk management framework, an all-hazards approach to be applied to the physical, human, and cyber components of the infrastructure. They also established goals and objectives to shape their sector partners' approach to managing sector risk. As part of their objective to enhance the all-hazards preparedness and resilience of the transportation systems sector, they included the need to identify critical cyber assets, systems, and networks and to implement measures to address strategic cybersecurity priorities.

For fiscal year 2014, TSA assessed risks to the transportation systems sector and reported the outcome to Congress. Although the assessment did not specifically quantify cyber risks for the sector, it considered cyber threats to transportation modes in hypothetical scenarios, such as the effect of a cyber attack disabling a public transit system. In addition, TSA's Office of Intelligence and Analysis provides transportation mode-specific annual threat assessments that include malicious cyber activity as part of the analysis. For example, the pipeline modal threat assessment considered computer network attacks that could disrupt pipeline functions and computer network exploitations that could allow unauthorized network access and theft of information.

In addition, we have previously reported that the Coast Guard needs to address cybersecurity in the maritime port environment by, among other things, including cyber risks in its biennial maritime risk assessment. Subsequently, the Coast Guard released its updated risk assessment for maritime operations, which identified the need to address cyber risk but did not identify vulnerabilities in relevant cyber assets.
The Environmental Protection Agency (EPA), in collaboration with sector partners, determined that a cyber attack is a significant risk to the water sector. Cyber attacks on industrial control systems are among the plausible hazards that threaten the water and wastewater systems sector, according to the risk assessment portion of the 2010 sector-specific plan. EPA concluded that attacks on the systems used to monitor and control water movement and treatment could disrupt operations at water and wastewater facilities, although the capability to employ manual overrides for critical systems could reduce the consequences of an attack. EPA recommended that water sector facilities regularly update or conduct an all-hazards risk assessment that includes cyber attacks as a priority threat. Further, the Roadmap to a Secure and Resilient Water Sector, developed in 2013 by EPA, DHS, and water sector partners, included advancing the development of sector-specific cybersecurity resources as a top priority for the sector.

Sector-specific agencies generally took actions to mitigate cyber risks and vulnerabilities for their respective sectors that address the Call to Action steps in the National Infrastructure Protection Plan. While the steps are not required of the SSAs, they are intended to guide national progress while allowing for differing priorities in different sectors. The SSAs had taken action to address most of the nine NIPP Call to Action steps. While SSAs for 12 of the 15 sectors had not identified incentives to promote cybersecurity in their sectors, as called for by one of the Call to Action steps, all of the SSAs have participated in a working group to identify appropriate incentives to encourage cybersecurity improvements across their respective sectors. In addition, SSAs for 3 of the 15 sectors had not yet made significant progress in advancing cyber-based research and development within their sectors because it had not been an area of focus for their sectors.
DHS guidance for updating the sector-specific plans directs the SSAs to incorporate the NIPP's actions to guide their cyber risk mitigation activities, including cybersecurity-related actions to identify incentives and promote research and development. Figure 1 depicts the NIPP Call to Action steps addressed by the SSAs. (App. II provides further details on actions taken to address the Call to Action steps for each sector.)

DHS implemented activities to mitigate cyber risks for the chemical sector for eight of the nine NIPP Call to Action steps; however, it had not established incentives to encourage its sector partners to voluntarily invest in cybersecurity-enhancing measures. DHS has developed technical resources, cybersecurity awareness tools, and information-sharing mechanisms among its activities to enhance the sector's cybersecurity. DHS officials described other cybersecurity activities in development, including updates to sector cybersecurity guidance that could include incentives; however, they were unable to identify specific incentives to encourage cybersecurity across the sector.

DHS conducted cyber mitigation activities that aligned with eight of the nine NIPP Call to Action steps for the commercial facilities sector. DHS provided technical assistance and supported information-sharing efforts for the sector. For example, it developed a risk self-assessment tool in conjunction with sector partners to raise awareness of the importance of their cyber systems. DHS also promoted a number of information-sharing mechanisms available through its Office of Cybersecurity and Communications, including the dissemination of alerts through the U.S. Computer Emergency Readiness Team (US-CERT), ICS-CERT, and the Commercial Facilities Cyber Working Group, among others. However, DHS did not identify efforts to establish incentives to encourage commercial facilities sector partners to implement cybersecurity-enhancing measures.
DHS worked to reduce risk to the communications sector through collaborative cyber risk mitigation activities that align with eight of the nine NIPP Call to Action steps. However, DHS did not establish incentives to promote cybersecurity for the sector. As previously stated, DHS and its communications sector partners completed the 2012 National Sector Risk Assessment for Communications, which examined risks from cyber incidents or events that threaten the sector's cyber assets, systems, and networks. According to DHS officials, the department coordinated mitigation activities with its communications sector partners and addressed risks identified through the assessment process. In addition, officials explained that the department implemented or facilitated sector-wide information-sharing mechanisms with such entities as the National Cybersecurity and Communications Integration Center, the National Infrastructure Coordinating Center, and the National Coordinating Center for Telecommunications and Communications Information Sharing and Analysis Center. Although DHS had not implemented specific cyber-related incentives for the communications sector, DHS officials stated that National Security staff and the Office of Policy have been working on possible national incentives, such as tax credits, for future use.

DHS focused cyber risk mitigation activities on seven of the nine NIPP Call to Action steps for the critical manufacturing sector. However, its cyber risk mitigation activities did not include efforts to incentivize cybersecurity or support cybersecurity-related research and development. Among its cyber risk mitigation activities, DHS participated in information-sharing efforts through the sector coordinating council to enhance situational awareness and led outreach efforts to encourage diverse (i.e., small, medium, and large companies) participation in the council as an activity to build national capacity.
Although specific incentives to encourage cybersecurity across the sector had not been put in place, DHS officials stated that they had been involved in a working group to study possible options such as cyber insurance. While the critical manufacturing sector-specific plan and associated annual report of sector activities indicated that goals and needs regarding sector research and development are areas for future development, DHS did not provide any examples of specific research and development activities addressing the sector's cybersecurity.

DHS developed cyber risk mitigation activities for the dams sector focused on eight of the nine NIPP Call to Action steps. However, DHS did not identify activities leveraging incentives to advance security and resilience; DHS officials stated that their efforts had not focused on incentives. Among its cyber risk mitigation activities, DHS officials facilitated the development of the Dams Sector Roadmap to Secure Control Systems, developed in 2010, which focuses on the cybersecurity of industrial control systems, where cyber risks may be more significant for individual entities. DHS also supported information-sharing mechanisms by promoting sector-wide information sharing and organized a cybersecurity working group to discuss cyber-relevant topics during quarterly meetings. Further, the department disseminated cyber vulnerability information to sector partners through advisories and alerts from DHS's ICS-CERT and US-CERT.

DOD devised cyber risk mitigation activities that align with eight of the nine NIPP Call to Action steps but had not established incentives to promote cybersecurity. Its cyber risk mitigation activities included sharing threat information and mitigation strategies for enhanced situational awareness and participating in DOD-centric exercises, among others.
Although DOD did not identify specific incentives to encourage cybersecurity in the defense industrial base sector, DOD officials stated that they had joined an interagency effort to explore various incentives that might be offered to industry to encourage use of the NIST Cybersecurity Framework. In addition, DOD officials noted that they have worked with the General Services Administration to develop strategic guidelines to incorporate cybersecurity standards in requirements for DOD contractors; however, this effort would not be part of DOD's voluntary sector cybersecurity program.

For the emergency services sector, DHS established or facilitated cyber risk mitigation activities for eight of the nine NIPP Call to Action steps; however, it had not instituted cybersecurity incentives. DHS officials stated that grants to state and local governments as incentives to encourage cybersecurity were not available, and no other types of incentives were identified. Among its activities, the department collaborated with emergency services sector partners in March 2014 to develop the Emergency Services Sector Roadmap to Secure Voice and Data Systems, which identified and discussed proposed risk mitigation activities and included justification for the response, sector context, barriers to implementation, and suggestions for implementation. DHS officials also noted various information-sharing mechanisms that disseminate cyber threat and vulnerability information to sector partners and allow reporting back to DHS.

DOE instituted or supported cyber risk mitigation activities that correspond to all nine of the NIPP Call to Action steps. For example, DOE provided grants to share the costs of sector partners' cybersecurity innovation efforts as an incentive for advancing cybersecurity and to support research and development of solutions to improve critical infrastructure security and resilience.
Other activities to encourage cybersecurity in the sector included the development of cybersecurity guidance to promote the use of NIST’s Cybersecurity Framework and establishing or supporting cyber threat information sharing mechanisms. DOE also developed and implemented the Cybersecurity Risk Information Sharing Program, a public-private partnership to facilitate the timely sharing of cyber threat information and develop situational awareness tools to enhance electric utility companies’ ability to identify, prioritize, and coordinate the protection of their critical infrastructure. The Department of the Treasury implemented or facilitated activities that served to mitigate cyber risk for the financial services sector. These activities correspond to eight of the nine NIPP Call to Action steps. However, Treasury had not developed incentives to encourage cybersecurity in the sector through its voluntary critical infrastructure protection program. Treasury officials noted that they foresee developing incentives as a result of a report to the President pursuant to an Executive Order 13636 requirement that outlined an approach for policymakers to evaluate the benefits and relative effectiveness of government incentives in promoting adoption of NIST’s Cybersecurity Framework. Using the results of the updated sector planning process to inform its efforts could assist Treasury in developing any such incentives, as appropriate. We have previously reported on additional efforts to address cyber risk in this sector. In July 2015, we reported on cyber attacks against depository institutions, banking regulators’ oversight of cyber risk mitigation activities, and the process for sharing cyber threat information. Specifically, we found that smaller depository institutions were greater targets for cyber attacks. 
Also, we noted that although financial regulators devoted considerable resources to overseeing information security at larger institutions, their limited IT staff resources generally meant that examiners with little or no IT expertise were performing IT examinations at smaller institutions. As a result, we recommended that these regulators collect and analyze additional trend information that could further increase their ability to identify patterns in problems across institutions and better target their reviews. Finally, with cyber threat information coming from multiple sources, including from Treasury and other federal entities, recipients contacted in the review found federal information repetitive, not always timely, and not always readily usable. To help address these needs, Treasury had various efforts under way to obtain such information and confidentially share it with other institutions, including participating in groups that monitor and provide threat information on cyber incidents. USDA and FDA, as co-SSAs for the food and agriculture sector, had cyber risk mitigation activities addressing six of the nine NIPP Call to Action steps. For example, the SSAs had encouraged sector-wide participation in DHS's program to promote NIST's Cybersecurity Framework, participated in the process to identify any cyber-dependent critical functions and services, and supported threat briefings to enhance situational awareness across the sector. According to food and agriculture SSA officials, they had other activities in progress, including facilitating sessions with their sector stakeholders as part of assessing risks to the sector and considering the development of food and agriculture sector-specific NIST Cybersecurity Framework implementation guidance to make the framework more relatable to food and agriculture stakeholders. 
However, other areas, including incentives to promote cybersecurity, research and development of security and resilience solutions, and lessons learned from exercises and incidents, have yet to be developed. As stated earlier, during the 2010 sector-specific planning process, cybersecurity risk was not considered significant for the sector, but USDA and FDA officials stated that they had incorporated cyber risk into their updated sector-specific plan and they continue to develop cybersecurity-related activities for the sector. HHS developed or supported activities addressing eight of the nine NIPP Call to Action steps. For example, HHS leveraged the private sector clearance program and access to classified information as incentives for sector stakeholders to participate in cybersecurity-enhancing activities. However, HHS had not performed any activities related to cybersecurity research and development. HHS officials stated that promoting research and development efforts to enhance the sector's cybersecurity was not a focus of their cyber risk mitigation activities during fiscal years 2014 and 2015. DHS, in collaboration with its information technology sector partners, implemented risk mitigation activities to enhance the sector's cybersecurity environment. We identified activities that addressed eight of nine NIPP Call to Action steps. DHS's IT sector cyber risk mitigation activities included the promotion of incident response and recovery capabilities, support for various cyber-related information sharing mechanisms, and capabilities for technical assistance to sector entities. However, DHS had not specifically identified and analyzed incentives to improve cybersecurity within the IT sector. DHS officials stated that they have collaborated with other federal agencies to develop options for cybersecurity enhancement incentives for the sector. DHS carried out risk mitigation activities that addressed eight of the nine NIPP Call to Action steps. 
These activities included collaborative efforts through established working groups and councils to share information about cybersecurity-related alerts, advisories, and strategies. DHS officials responsible for nuclear SSA efforts referred to the Roadmap to Enhance Cyber Systems Security in the Nuclear Sector as guidance they developed in June 2011 and disseminated to sector partners for determining cyber risk and a vision for mitigating it over a 15-year period. However, DHS's cyber risk mitigation activities did not include incentives for nuclear sector partners to enhance cybersecurity. The Department of Transportation and DHS's TSA and U.S. Coast Guard put in place cyber risk mitigation activities in line with all nine NIPP Call to Action steps. For example, TSA shared cyber threat intelligence and information from the National Cybersecurity and Communications Integration Center to multiple transportation modes through its threat dissemination channels. In addition, at TSA's request, classified information had been "tearlined," or downgraded, so that it could be shared with sector officials without security clearances while withholding sensitive and restricted details. Further, the U.S. Coast Guard used DHS's Port Security Grant Program as an incentive for cybersecurity efforts in the maritime subsector; this grant program provides funding for maritime transportation security measures, including cybersecurity. However, as we have previously reported, this program did not always make use of cybersecurity-related expertise and other information in allocating grants. Accordingly, we recommended that the program take steps to make better-informed funding decisions. In addition, TSA officials stated that they have participated in working groups to identify other cybersecurity-related incentives across the various transportation modes. 
EPA incorporated cyber risk mitigation activities that aligned with eight of the nine NIPP Call to Action steps. However, EPA had not established incentives to encourage sector partners to enhance their security and resiliency. EPA officials stated that providing funds to support cybersecurity enhancements would be an incentive for their sector partners; however, they lacked the resources to offer grants to implement security measures. EPA officials also stated that they are working on implementing recommendations from the Critical Infrastructure Partnership Advisory Council's Water Sector Cybersecurity Strategy Workgroup, which include exploring ways to demonstrate how the benefits of implementing cybersecurity enhancements outweigh the costs of cyber incidents as an incentive to encourage investment in cybersecurity improvements. Sector-specific agencies use various collaborative mechanisms to share cybersecurity-related information across all of the sectors. Presidential Policy Directive 21 (PPD-21) states that sector-specific agencies are to coordinate with DHS and other relevant federal departments and agencies and collaborate with critical infrastructure owners and operators to strengthen the security and resiliency of the nation's critical infrastructure. SSAs share information and collaborate across sectors primarily through a number of councils, working groups, and information-sharing centers established by federal entities. The mechanisms identified during our review for SSAs to collaborate across the sectors are summarized, along with the number of sectors represented in each council or group by their respective SSA, in table 6. The mechanisms provide SSAs opportunities to interact, collaborate, and coordinate with one another. For example, each of the sectors we reviewed used working groups created under the Critical Infrastructure Partnership Advisory Council. 
According to the CIPAC 2013 annual report, in 2012 there were 60 working groups that held approximately 200 meetings with objectives such as information sharing, training and exercises, and risk management. In addition, SSAs used their respective government coordinating councils to coordinate with other SSAs about interdependencies and to gain access to needed expertise about the operations of other sectors. For example, DHS officials stated that the communications sector's government coordinating council membership provides the expertise necessary to fulfill the council's mission. They stated that its current membership includes representatives from DOD, DOE, and Treasury, among others, and from multiple DHS components. Further, SSAs repeatedly referred to the Cross-Sector Cyber Security Working Group and the Industrial Control System Joint Working Group as two of the main cybersecurity-related collaborative opportunities for federal agencies. Both of these working groups facilitate government sharing of information among officials representing different sectors. The Cross-Sector Cyber Security Working Group operates under DHS's Office of Cybersecurity and Communications. It provides the SSAs the opportunity to establish and maintain cross-sector partnerships; work on cross-cutting issues, such as incentives to encourage cybersecurity actions; and identify cyber dependencies and interdependencies that allow them to share information on cybersecurity trends that can affect their respective sectors. According to DHS, more than 100 members attend monthly meetings to share information and activities about their respective sectors. Of the SSAs representing the 15 sectors we reviewed, SSAs for 14 sectors indicated in their documentation or statements that they were active participants in this working group. 
The Industrial Control System Joint Working Group was established by DHS’s Industrial Control Systems Cyber Emergency Response Team to facilitate information sharing and reduce the risk to the nation’s industrial control systems. According to DHS, the goal of this working group is to continue and enhance the collaborative efforts of the industrial control systems stakeholder community by accelerating the design, development, and deployment of secure industrial control systems. SSAs for 12 of the 15 sectors within the scope of our review were active participants in the working group. For example, HHS officials stated that they attend the Industrial Control System Joint Working Group meetings as a way to analyze relationships and identify overlapping actions with other sectors. Table 7 provides examples of cross-sector collaboration in relation to the sectors. In addition to the mechanisms identified above, further collaboration occurred through the co-location of sectors’ SSAs within one department. DHS, as the SSA for eight critical infrastructure sectors, has six of the sectors assigned to officials under the Infrastructure Protection group, and two under the Cybersecurity and Communications group. DHS’s Office of Infrastructure Protection officials representing several SSAs stated that they leverage DHS’s Office of Cybersecurity and Communications capabilities and resources for their sectors. Further, housing these responsibilities within the same organization provided efficiencies for their respective critical infrastructure sectors. For example, according to documentation for the critical manufacturing sector SSA, officials are leveraging training curricula produced by other Office of Infrastructure Protection SSA officials. 
Additionally, DHS had co-located the National Cybersecurity and Communications Integration Center and the National Infrastructure Coordinating Center, bringing the two 24x7 watch centers together to share physical and cyber information related to critical infrastructure. Finally, SSAs used the Homeland Security Information Network (HSIN) sector pages to collaborate across sectors. HSIN is a network for homeland security mission operations to share sensitive but unclassified information, including with the critical infrastructure community. It is to provide real-time collaboration tools including a virtual meeting space, document sharing, alerts, and instant messaging. Officials from SSAs associated with 14 of the 15 sectors stated that they used HSIN to share information with stakeholders within their respective sectors. For example, within the dams HSIN portal, the sector implemented a Suspicious Activity Report online tool to provide users with the capability to report and retrieve information pertaining to suspicious activities that could compromise a facility or system in a manner that would cause an incident jeopardizing life or property. Additionally, officials from the chemical sector stated that they use HSIN for the coordination of cybersecurity incidents within the sector, and officials from the critical manufacturing SSA stated that when entities from their sector reach out to them for more information on threats or alerts, they direct them to subscribe to the critical manufacturing HSIN page. The NIPP includes guidance to SSAs to focus on the outcomes of their security and resilience activities. Specifically, as noted earlier, one of the NIPP Call to Action steps directs SSAs and their sector partners to identify high-level outcomes to facilitate evaluation of progress toward national goals and priorities, including securing critical infrastructure against cyber threats. 
In addition, the NIPP risk management framework, used as a basis for the sector-specific plans, includes measuring the effectiveness of the SSAs’ risk mitigation activities as a method of monitoring sector progress. Among the SSAs, DOD, DOE, and HHS had established performance metrics to monitor cybersecurity-related activities, incidents, and progress in their sectors. DOD monitored cybersecurity for the defense industrial base sector through reports of cyber incidents and cyber incidents that were blocked; reports from owners and operators regarding efforts to execute the sector-specific plan’s implementation actions; and the number of cyber threat products disseminated by DOD to cleared companies and the timeliness of shared threat information. DOD also prepared annual reports for Congress for fiscal years 2010 through 2014 that provided information on sector performance metrics. DOE developed the ieRoadmap, an interactive tool designed to enable energy sector stakeholders to map their energy delivery system cybersecurity efforts to specific milestones identified in the Roadmap to Achieve Energy Delivery Systems Cybersecurity. DOE also established the Cybersecurity Capability Maturity Model program to support ongoing development and measurement of cybersecurity capabilities. The voluntary program provides a mechanism for measuring cybersecurity capabilities from a management and program perspective. HHS monitored cybersecurity metrics such as the number of subscribers to receive its security alerts and incidents of health information security breaches. The Health Information Technology for Economic and Clinical Health (HITECH) Act requires that health care data breaches be reported to the affected individuals and HHS, compiled in an annual HHS report to Congress, and for breaches affecting 500 or more individuals, shared with the media. 
HHS officials stated that they use the information on data breaches as an indicator of cybersecurity-related trends for the sector. However, SSAs for the other 12 sectors had not developed or reported performance metrics, although some had efforts under way to do so. For selected sectors, including financial services and water and wastewater systems, SSAs emphasized that they rely on their private sector partners to voluntarily share information and so are challenged in gathering the information needed to measure efforts. Sector stakeholders are not necessarily willing to openly share potentially sensitive cybersecurity-related information. Also, the DHS guidance to the SSAs for updating their sector-specific plans includes directions to create new metrics to evaluate the sectors' security and resilience progress; however, the plans have not been finalized and released. DHS had not developed performance metrics to monitor cybersecurity progress for its eight sectors, although according to agency officials, such efforts are under way. For example, DHS lacked metrics for the chemical sector; however, officials stated that multiple industry working groups were working on cyber performance metrics to measure progress at a very high level. In addition, in 2011, a nuclear cybersecurity roadmap document was released that outlined milestones and specific cybersecurity goals for the sector over a 15-year period, including the need for metrics to measure and assess the sector's cybersecurity posture. The nuclear sector roadmap provides near-, mid-, and long-term goals but not specific measures or criteria to assess the sector's cybersecurity posture. Further, according to DHS officials, a number of initiatives were begun to gather performance-related information, including the following: DHS's Programmatic Planning and Metrics Initiative was established in October 2014 to gather data from the department's sectors and monitor their cybersecurity progress. 
However, as of the time of our review, the initiative had only limited historical data. DHS’s Sector Outreach and Programs Division plans to implement program metrics to measure and analyze adoption of cybersecurity practices and NIST’s Cybersecurity Framework across the sectors. DHS officials for the information technology and communications sectors stated that they had proposed performance metrics to be implemented through 2018. In a review of cybersecurity related to the nation’s communications networks, we reported that DHS and its partners had not developed outcome-based metrics related to the cyber-protection activities for the communications sector. We recommended that DHS and its sector partners develop, implement, and track sector outcome-oriented performance measures for cyber protection activities related to the nation’s communications networks. Regarding the financial services sector, Treasury officials stated that the department does not have performance metrics to chart the sector’s cybersecurity-related progress. However, according to Treasury officials, the sector coordinating council is working with the Financial and Banking Information Infrastructure Committee to identify metrics to evaluate progress in the sector. According to the officials, identifying actionable metrics based on cyber risk mitigation programs is a challenge. Treasury officials emphasized that the information needed is privately owned and may or may not be voluntarily shared with government partners. The food and agriculture 2010 sector-specific plan stated that the sector did not have metrics to measure the effectiveness of risk mitigation efforts, although it acknowledged the need to establish tracking and monitoring mechanisms. The plan also noted that sector partners, including state agencies and private industry, may view reporting programmatic data as a burden and question the security of the data once reported. 
In December 2014, USDA officials noted that they do not have formal mechanisms to measure sector progress, although survey results collected through food safety inspection activities have some security elements. The ongoing process to update the sector-specific plan provides USDA and HHS an opportunity to consider possible performance metrics for monitoring the sector's cybersecurity progress. The transportation systems sector SSAs had also not instituted mechanisms to evaluate the progress of sector entities in achieving a more secure sector. For example, TSA officials stated that they are developing cyber metrics in line with the 2014 Sector-Specific Plan Guidance; however, the officials noted that their industry partners are reluctant to share information needed to monitor improvement in the sector because they fear regulation. Finally, EPA does not collect performance information to provide metrics on the effectiveness of its cybersecurity programs for the water sector. Agency officials noted that the lack of statutory authority is a major challenge to collecting performance metrics data. In the absence of statutory authority or agency policy, EPA must work with water sector associations to collect the information across the sector. However, water utilities may be reluctant to voluntarily report security information to EPA. EPA is also working with the Water Sector Coordinating Council to identify performance metrics for implementation of NIST's Cybersecurity Framework in the water sector, according to agency officials. Until SSAs develop performance metrics and collect data to report on the progress of their efforts to enhance the sectors' cybersecurity posture, they may be unable to adequately monitor the effectiveness of their cyber risk mitigation activities and document the resulting sector-wide cybersecurity progress. 
Overall, SSAs are acting to address sector cyber risk, but additional monitoring actions could enhance their respective sectors' cybersecurity posture. Most SSAs had identified the significance of cyber risk to their respective sectors as part of the 2010 sector-specific planning process, with four sectors concluding that cyber risk was not significant at that time but subsequently reconsidering the significance of cyber risks to their sectors. However, to prepare the 2015 updates to their sector-specific plans, the planning guidance directed the SSAs to address their current and emerging sector risks, including the cyber risk landscape and key trends shaping their approach to managing risk. Toward this end, all of the SSAs had generally performed cyber risk mitigation activities that address the NIPP's Call to Action steps, and regarding incentives (one area not addressed by most of the SSAs), efforts had begun to determine appropriate ways to encourage additional cybersecurity-related efforts across the nation's critical infrastructures. To their credit, SSAs are engaged in multiple public-private and cross-sector collaboration mechanisms that facilitate the sharing of information, including cybersecurity-related information. However, most SSAs have not developed metrics to measure and improve the effectiveness of all their cyber risk mitigation activities and their sectors' cybersecurity posture. As a result, SSAs may not be able to adequately monitor and document the benefits of their activities in improving the sectors' cybersecurity posture or determine how those efforts could be improved. 
To better monitor and provide a basis for improving the effectiveness of cybersecurity risk mitigation activities, we recommend that, informed by the sectors' updated plans and in collaboration with sector stakeholders, the

Secretary of Homeland Security direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the chemical, commercial facilities, communications, critical manufacturing, dams, emergency services, information technology, and nuclear sectors' cybersecurity progress;

Secretary of the Treasury direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the financial services sector's cybersecurity progress;

Secretaries of Agriculture and Health and Human Services (as co-SSAs) direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the food and agriculture sector's cybersecurity progress;

Secretaries of Homeland Security and Transportation (as co-SSAs) direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the transportation systems sector's cybersecurity progress; and

Administrator of the Environmental Protection Agency direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the water and wastewater systems sector's cybersecurity progress.

We provided a draft of this report to the Departments of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and the Treasury and to EPA. In written comments signed by the Director, Departmental GAO-OIG Liaison Office (reprinted in app. III), DHS concurred with our two recommendations. DHS also provided details about efforts to address cybersecurity in the sectors for which DHS has responsibility as the SSA. 
DHS also stated that it supports the intent of the recommendation to improve cybersecurity, including efforts to develop performance metrics. Further, in regard to the transportation sector specifically, DHS stated that the Transportation Security Administration and the United States Coast Guard would work in collaboration with the Department of Transportation to ensure that cybersecurity is at the forefront of their voluntary partnership. In written comments signed by the Department of the Treasury’s Acting Assistant Secretary for Financial Institutions (reprinted in app. IV), the department stated that monitoring the sector’s cybersecurity progress is a critical component of the sector’s efforts to reduce cybersecurity risk and discussed efforts with the department’s partners to improve the sector’s ability to assess progress and develop metrics. In written comments signed by EPA’s Deputy Assistant Administrator (reprinted in app. V), EPA generally agreed with our recommendation and discussed efforts to develop cybersecurity performance metrics for the water and wastewater systems sector. The Department of Transportation’s Director of Program Management and Improvement stated in an e-mail that the department concurred with our findings and our recommendation directed to the Secretary of Transportation and stated that it would continue to work with DHS to improve cyber risk mitigation activities and strengthen the transportation sector’s cybersecurity posture. If effectively implemented, the actions identified by these departments should help address the need to better measure cybersecurity progress in the sectors. The Departments of Agriculture and Health and Human Services did not comment on the recommendations made to them. 
In addition, officials from the Departments of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, and the Treasury and EPA also provided technical comments via e-mail that have been addressed in this report as appropriate. The Department of Transportation did not have technical comments for the report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and the Treasury; the Administrator of the Environmental Protection Agency; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to determine the extent to which sector-specific agencies (SSA) have (1) identified the significance of cyber risks to their respective sectors’ networks and industrial control systems, (2) taken actions to mitigate cyber risks within their respective sectors, (3) collaborated across sectors to improve cybersecurity, and (4) established performance metrics to monitor improvements in their respective sectors. To conduct our evaluation, we analyzed relevant critical infrastructure protection policies and guidance for improving the cybersecurity posture of the nation’s critical infrastructure. Based on these analyses, we identified nine federal agencies designated as the sector-specific agencies for the critical infrastructure sectors. For this review, we focused on eight of the nine sector-specific agencies responsible for 15 of the 16 critical infrastructure sectors. 
We included the 15 sectors that involve private sector stakeholders in their efforts to implement activities to address sector security and resiliency goals. We excluded the General Services Administration, the sector-specific agency for the government facilities sector, as the sector is uniquely governmental with facilities either owned or leased by government entities. See Table 8 for the sectors and sector-specific agencies included in our review. To determine how sector-specific agencies prioritized cyber risks, we analyzed their efforts to identify and document cyber risks. We reviewed the risk assessment methodologies employed as documented in the 2010 sector-specific plans and other supplementary documentation such as formal risk assessments, strategy documents, and annual reports. We also interviewed officials responsible for carrying out the sector-specific agency roles and responsibilities to further understand their determination of the significance of cyber-related risks to their respective sectors. To identify SSAs' activities to mitigate cyber risks, we compared sector-specific planning documents and actions to fulfill roles and responsibilities as identified in federal policy and the 2013 National Infrastructure Protection Plan (NIPP) Call to Action steps related to cyber risks. The NIPP steps are suggested practices to guide sector-specific agencies' actions. The NIPP presented a total of 12 steps; however, we excluded 2 steps that we determined did not have a cybersecurity-related nexus. We analyzed the latest sector-specific plans, which were released in 2010, and other sector-specific planning documents including risk assessments and strategies for each of the sectors. We also interviewed officials from the SSAs and obtained related documentation to identify cyber risk mitigation activities. 
Additionally, we interviewed private sector stakeholders representing the sector coordinating councils to corroborate the sector-specific agencies' cyber risk mitigation activities. We used all of this information to determine the extent to which each of the sector-specific agencies conducted activities for 9 of the NIPP Call to Action steps. To determine the extent of the sector-specific agencies' collaborative efforts to enhance their sectors' cybersecurity environment, we reviewed documentation related to the collaboration mechanisms utilized by the sector-specific agencies. We also identified the collaborative groups, councils, and working groups that were utilized most frequently by SSAs to share cybersecurity-related information across the sectors. We analyzed documentation of cross-sector collaboration from the sector, government, and cross-sector coordinating councils. Additionally, we interviewed SSA officials and private sector stakeholders representing the sector coordinating councils. To identify performance measures used by SSAs to monitor cybersecurity in their respective sectors, we analyzed the sector-specific plans and cybersecurity-related performance reporting documents and interviewed SSA officials. We reviewed performance evaluation guidance related to national security and resiliency goals provided to the SSAs for past and future planning efforts. Additionally, we reviewed past sector annual reports, which tracked actions of the sector against goals established in the 2010 sector-specific plans, as well as strategic documents or roadmaps used to track sector performance. We reviewed reports of cyber incidents and data breaches provided as examples of indicators for SSAs to monitor sector cybersecurity. We also interviewed private sector partners to identify sources of cybersecurity-related data being reported to the sector-specific agencies. 
We conducted this performance audit from June 2014 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides further details on the cyber risk mitigation activities that sector-specific agencies (SSAs) developed for the 15 sectors in our review, based on our analysis of documentation and statements from SSA officials. Tables 9 through 23 below show, for each sector, SSA actions that aligned with the 2013 National Infrastructure Protection Plan (NIPP) Call to Action steps. In addition to the contact named above, Michael W. Gilmore, Assistant Director; Kenneth A. Johnson; Lee McCracken; David Plocher; Di’Mond Spencer; Jonathan Wall; and Jeffrey Woodward made key contributions to this report.
U.S. critical infrastructures, such as financial institutions, commercial buildings, and energy production and transmission facilities, are systems and assets, whether physical or virtual, vital to the nation's security, economy, and public health and safety. To secure these systems and assets, federal policy and the NIPP establish responsibilities for federal agencies designated as SSAs, including leading, facilitating, or supporting the security and resilience programs and associated activities of their designated critical infrastructure sectors. GAO's objectives were to determine the extent to which SSAs have (1) identified the significance of cyber risks to their respective sectors' networks and industrial control systems, (2) taken actions to mitigate cyber risks within their respective sectors, (3) collaborated across sectors to improve cybersecurity, and (4) established performance metrics to monitor improvements in their respective sectors. To conduct the review, GAO analyzed policy, plans, and other documentation and interviewed public and private sector officials for 8 of 9 SSAs with responsibility for 15 of 16 sectors. Sector-specific agencies (SSA) determined the significance of cyber risk to networks and industrial control systems for all 15 of the sectors in the scope of GAO's review. Specifically, they determined that cyber risk was significant for 11 of 15 sectors. Although the SSAs for the remaining four sectors had not determined cyber risks to be significant during their 2010 sector-specific planning process, they subsequently reconsidered the significance of cyber risks to the sector. For example, commercial facilities sector–specific agency officials stated that they recognized cyber risk as a high-priority concern for the sector as part of the updated sector planning process. SSAs and their sector partners are to include an overview of current and emerging cyber risks in their updated sector-specific plans for 2015. 
SSAs generally took actions to mitigate cyber risks and vulnerabilities for their respective sectors. SSAs developed, implemented, or supported efforts to enhance cybersecurity and mitigate cyber risk with activities that aligned with a majority of actions called for by the National Infrastructure Protection Plan (NIPP). SSAs for 12 of the 15 sectors had not identified incentives to promote cybersecurity in their sectors as proposed in the NIPP; however, the SSAs are participating in a working group to identify appropriate incentives. In addition, SSAs for 3 of 15 sectors had not yet made significant progress in advancing cyber-based research and development within their sectors because it had not been an area of focus for their sector. Department of Homeland Security guidance for updating the sector-specific plans directs the SSAs to incorporate the NIPP's actions to guide their cyber risk mitigation activities, including cybersecurity-related actions to identify incentives and promote research and development. All SSAs that GAO reviewed used multiple public-private and cross-sector collaboration mechanisms to facilitate the sharing of cybersecurity-related information. For example, the SSAs used councils of federal and nonfederal stakeholders, including coordinating councils and cybersecurity and industrial control system working groups, to coordinate with each other. In addition, SSAs participated in the National Cybersecurity and Communications Integration Center, a national center at the Department of Homeland Security, to receive and disseminate cyber-related information for public and private sector partners. The Departments of Defense, Energy, and Health and Human Services established performance metrics for their three sectors. However, the SSAs for the other 12 sectors had not developed metrics to measure and report on the effectiveness of all of their cyber risk mitigation activities or their sectors' cybersecurity posture. 
This was because, among other reasons, the SSAs rely on their private sector partners to voluntarily share information needed to measure efforts. The NIPP directs SSAs and their sector partners to identify high-level outcomes to facilitate progress toward national goals and priorities. Until SSAs develop performance metrics and collect data to report on the progress of their efforts to enhance the sectors' cybersecurity posture, they may be unable to adequately monitor the effectiveness of their cyber risk mitigation activities and document the resulting sector-wide cybersecurity progress. GAO recommends that certain SSAs collaborate with sector partners to develop performance metrics and determine how to overcome challenges to reporting the results of their cyber risk mitigation activities. Four of these agencies concurred with GAO's recommendations, while two did not comment on them.
USERRA applies to public and private employers in the United States, regardless of size, and includes federal, state, and local governments, as well as for-profit and not-for-profit private sector firms. In addition to the reemployment provisions, USERRA also prohibits discrimination in employment against individuals because of their service, obligation to perform service, or membership or application for membership in the uniformed services. Generally, servicemembers who were absent from their civilian jobs by reason of their service are entitled to the reemployment rights and benefits provided by USERRA if they provided their employer with advance notice of their service requirement when possible, did not exceed 5 years of cumulative uniformed service with respect to that employer, left service under honorable conditions, and reported back to work or applied for reemployment in a timely manner. Servicemembers who meet their USERRA requirements are entitled to prompt reinstatement to the positions they would have held if they had never left their employment or to positions of like seniority, status, and pay; continued health coverage for a designated period of time while absent from their employers and immediate reinstatement of health coverage upon return; training, as needed, to requalify for their jobs; periods of protection against discharge (without cause) based on the length of service; and nonseniority benefits that are available to other employees who are on leaves of absence. If a servicemember believes that his or her USERRA rights have been violated, the servicemember may seek formal assistance from federal agencies in resolving the complaint. Figure 1 below shows the formal USERRA complaint process using federal assistance and the deadlines imposed by VBIA 2008. In addition, VBIA 2008 requires DOJ, DOL, and OSC to submit quarterly reports to Congress on their compliance with the deadlines. 
The reports cover USERRA activities for the previous quarter and are due within 30 days of the end of that quarter. To implement VBIA 2008 reporting requirements, each of the agencies updated its existing database used both to maintain data on USERRA cases and to produce the required reports. DOL produces its quarterly USERRA reports by extracting data contained in its USERRA Information Management System, a Web-based system managed by VETS that includes critical events in the history of the case, case resolution, complainant and employer names, and dates. DOJ’s Employment Litigation Section of the Civil Rights Division maintains data on the extent to which it meets VBIA 2008 deadlines in a WordPerfect log, which is an ancillary system to the Civil Rights Division’s primary data system. OSC uses OSC 2000, which was designed to capture and record data from the initial filing of a complaint until the closure and archiving of the case file and allows for queries that create a number of management and workload reports. Our analysis showed that in the 1,663 investigations included in our review, DOL generally met the original deadline or a new deadline agreed to by the servicemember. For investigations, DOL met the original 90-day deadline or an extended deadline in about 99 percent of cases. During the period covered by our review, DOL took on average about 52 days to complete an investigation. Figure 2 shows the extent to which DOL met initial and extended investigation deadlines. When DOL exceeded the deadline, it generally negotiated an extension with the servicemember to complete the investigation and met those extended deadlines. In the 213 cases where DOL asked for and received the complainant’s consent for an extension, DOL met the last extended deadline in approximately 93 percent (198) of the investigations. For cases that exceeded the deadline, the average processing time was approximately 138 days. 
The longest investigation took nearly a year (357 days) to complete. According to DOL, this case, for which the Pension Benefit Guaranty Corporation (PBGC) was the trustee, involved a servicemember who returned to employment after his pension plan had been terminated and was affected by a change in PBGC rules under the Employee Retirement Income Security Act (ERISA) of 1974. As of February 28, 2010, 68 cases subject to our review remained open (i.e., their investigation had not yet been completed). For investigations that remained open, 32 of 68 cases had been open for more than 90 days. The average age of those still open was approximately 104 days. As of February 28, 2010, the longest-open investigation had been open for 285 days. According to DOL, during the course of the 285 days, the investigation had been closed for 45 days due to the complainant’s lack of response to the investigator’s inquiries. After DOL reopened the investigation, the parties reached a settlement, but DOL kept the case open, in accordance with DOL policy, until all the terms of the settlement had been met. To assess the progress of investigations taking more than 90 days, VETS officials said that they produce a monthly management report, which helps them identify and eliminate any barriers to resolution. The report is also reviewed to identify any recurring issues that need to be resolved through revised procedures or enhanced training. When servicemembers requested a referral, DOL met either the original 60-day deadline to send the case to DOJ or OSC or an extended deadline in more than 99 percent of the 205 referrals in our review. During the period covered by our review, DOL took, on average, about 67 days to send the memorandum of referral to DOJ or OSC. Figure 3 shows the extent to which DOL met initial and extended referral deadlines. 
When DOL exceeded the deadline on referral cases, it generally negotiated an extension with the servicemember in order to finish processing the referral and send it to DOJ or OSC. Where DOL asked for and received the complainant’s consent for an extension of time, DOL met the last extended referral deadline in nearly all cases—73 of 74 cases. For the 74 cases that exceeded the deadline, the average processing time was about 113 days, with the longest referral taking 348 days to process. According to DOL, the complainant in this case had been injured in service and was not medically ready to return to work at the time of the referral. Once the complainant became medically stable, DOL could determine whether there was an appropriate reemployment position to which the complainant could return, and the case was ultimately resolved. As of February 28, 2010, 34 referral cases subject to our review remained open and were still being processed by DOL. Of the referrals still open as of February 28, 2010, 13 of 34 cases had been open for more than 60 days. The average age for those still open was 71 days. The referral that had been open the longest had been open for 324 days. According to DOL, it was difficult to obtain the employer’s compliance with the terms of the settlement agreement, and DOL kept the case open until the employer complied. To assess the progress of referral processing, DOL also produces a monthly management report on referrals to ensure that established procedures are being followed and that, if the referral process will exceed 60 days, DOL will negotiate for and document an extension of time. 
To implement the VBIA 2008 requirement to notify servicemembers of their USERRA complaint process rights within 5 days of receiving a complaint, DOL created a standard notification letter that advises servicemembers of their right to request that their case be referred to DOJ or OSC for further review, or to file a complaint using private counsel. For complaints filed electronically, DOL updated its USERRA database to automatically generate the standard notification in an e-mail and send it directly to servicemembers. For complaints filed in hard copy, the assigned DOL employee is to send the servicemember a copy of the notification letter via e-mail or mail. To ensure that the notifications are sent to the servicemember, DOL requires the investigator to make a notation in the hard copy case file indicating that the notification was sent and on what date. However, DOL does not record this information in its USERRA database and does not track the extent to which it complies with the notification requirement. VBIA 2008 does not require DOL to report on the extent to which it complies with this notification requirement. We have previously reported on the importance of ensuring that servicemembers are appropriately notified of their rights. In 2007, we reported that DOL did not consistently notify complainants of their rights at the end of the investigation and recommended that DOL update its operations manual and augment its training. Since 2007, DOL has taken actions to improve its process for notifying servicemembers of their rights at the end of the investigation. 
However, because VBIA 2008 does not require DOL to report on the extent to which it meets the new requirement to notify servicemembers of their rights in writing within 5 days of receiving the complaint, and DOL does not maintain and monitor such data, Congress and DOL cannot be assured that servicemembers who file complaints are being adequately informed of their USERRA process rights in accordance with VBIA 2008. Although DOL does not maintain data in its USERRA database on notifications of USERRA complaint process rights, we were able to estimate, based on our review of a random sample of case files, the extent to which DOL notified servicemembers of their USERRA complaint process rights within 5 days. Specifically, we estimated that in about 85 percent of cases, DOL notified complainants of their rights within 5 days. In about 9 percent of the cases, we estimated that DOL notified complainants late. Of the complaints in our sample where DOL exceeded the 5-day deadline, DOL notified complainants of their rights within 12 days. In about 7 percent of the cases, DOL did not have evidence of notification of rights. In the cases in our sample where we found no evidence of notification, servicemembers had filed their complaints in hard copy. About one-third of the cases in our sample were filed in hard copy. Moreover, where servicemembers filed complaints electronically, we found evidence in all cases that DOL notified the servicemembers of their complaint process rights. DOL is planning to implement a new process for handling hard copy complaints, which, according to DOL, would help to ensure that all servicemembers are notified of their rights in a timely manner. According to DOL, all hard copy filed complaints will be submitted first to the USERRA Regional Lead Center. The Lead Center will enter the hard copy complaints into the electronic complaint system, and the complaint will then be treated in the same way as if it had been filed electronically. 
This includes immediately notifying the complainants that the complaint has been received, providing them with appropriate VETS contact information, notifying complainants of their rights, assigning the case to the appropriate VETS office, and keeping records of all those actions. This new procedure requires a change in the complaint form, which is pending approval from the Office of Management and Budget. DOL officials plan to implement the new process as soon as the new complaint form is approved, which DOL officials expect will occur in the fall of 2010. Our analysis shows that in the 201 cases included in our review, DOJ met the original deadline or an extended deadline in about 96 percent of all cases. According to DOJ, complaints against state employers are not covered under the 60-day deadline. However, because DOJ maintains data on the extent to which it met the 60-day deadline in state employer cases and reports on these cases in its quarterly reports, we have included these cases in our analysis. During the period covered by our review, DOJ took on average about 35 days to make a decision on representation (or initiation of legal action) and to notify the complainant of its decision. Figure 4 shows the extent to which DOJ met initial and extended referral deadlines. For the 29 cases that exceeded the 60-day deadline, the average processing time was 101 days. The longest case took 342 days to reach a decision on representation. According to DOJ, the servicemember in this case was deployed overseas, and because DOJ wanted to conduct an in-person interview with him prior to making a decision on representation, DOJ obtained an extension until his return. Three other cases exceeded the deadline by 60 days or more. Of those cases, one involved a servicemember with an overseas deployment, another was delayed due to settlement negotiations, and the third required DOJ to collect additional information to make a decision on representation. 
Of those cases that exceeded the deadline, DOJ sought an extension in 21 of 29 cases. For cases where DOJ asked for and received the servicemember’s consent for an extension of time, DOJ met the last negotiated deadline in all of the cases. As of February 28, 2010, four cases included in our review remained open and were still being processed by DOJ. Three of these cases had been open for more than 60 days, with the longest open for 89 days. Our analysis showed that 6 of 12 cases against state employers took more than 60 days to process. Comparatively, 23 of 189 cases against private or local government employers exceeded the 60-day deadline. Therefore, servicemembers who are employed by state governments may not be receiving the same treatment as other servicemembers in terms of the timeliness of USERRA complaint processing. According to DOJ officials, the statutory deadline does not apply in cases against a state employer. Specifically, DOJ officials stated that the statutory deadline only applies where the Attorney General makes a decision whether to “appear on behalf of, and act as attorney for” the servicemember. This provision only applies to cases against private employers because in those cases, DOJ represents the servicemember. For cases against state employers, however, DOJ must bring cases on behalf of the United States as the plaintiff. Since in these instances DOJ “appears on behalf of and acts as attorney for” the United States—not the servicemember—the statutory deadline does not apply, according to DOJ. Nevertheless, DOJ maintains data on the extent to which it processes these cases within 60 days and includes information on these cases in the narrative section of its quarterly reports to Congress. 
DOJ similarly states that the statutory requirement to seek consent for an extension of the 60-day deadline does not apply to situations involving state employers since DOJ does not represent the individual servicemember, but is instead representing the interests of the United States as the plaintiff, or real party in interest. DOJ officials said that to require DOJ to seek such consent from a servicemember in situations involving state employers would create the appearance that the servicemember is the real party in interest and that DOJ is not representing the U.S. government, but the servicemember. According to DOJ, this could foster Eleventh Amendment challenges by states, which would argue that it is the servicemember, not the United States, that is the plaintiff or real party in interest and that such a suit runs afoul of the Eleventh Amendment in the same way that a private suit brought by a servicemember against a state employer would. Our analysis showed that in 6 of 13 private employer cases where the servicemember was involved in settlement negotiations and DOJ declined representation, DOJ notified the servicemember of its decision to decline representation but continued to aid the parties with facilitating a settlement. According to DOJ officials, once it has declined representation, DOJ no longer counts the time it spends working on the case in measuring compliance with the statutory time frame. Consequently, DOJ does not report this time following the decision on representation. DOJ officials said that for some cases they made the decision not to offer representation, but continued to aid parties in facilitating settlement because they thought it was in the best interest of the servicemember. VBIA 2008 requirements were enacted in part due to congressional recognition of servicemember concerns over the length of time it takes for USERRA complaints to be resolved. 
Because VBIA 2008 does not require the agencies to report time they spend on a case after declining representation, Congress is not getting a full picture of the effort that DOJ makes on behalf of servicemembers. Our analysis showed that in the 45 cases included in our review, OSC generally met the original deadline or an extended deadline agreed to by the servicemember. OSC met the original 60-day deadline or an extended deadline in 42 of 45 cases. During the period covered by our review, OSC took, on average, about 61 days to make a decision on representation and to notify the servicemember of its decision. Figure 5 shows the extent to which OSC met initial and extended deadlines. For cases where OSC asked for and received the complainant’s consent for an extension of time to make a decision on representation, OSC met the last extended deadline in three of the four cases. The longest case, which was delayed because OSC discovered it needed to gather more information in the case, took 240 days to reach a decision on representation. VBIA 2008 requires that DOL, DOJ, and OSC submit quarterly reports to Congress within 30 days of the end of each quarter. Based on our review of the transmittal letters for quarterly reports submitted between October 10, 2008, and December 31, 2009, DOL was late in submitting all five of its quarterly reports, ranging from 4 to 46 days late. DOJ was late in submitting its quarterly reports in four of the five quarters of our review, by a range of 11 to 40 days. During the period covered by our review, OSC consistently submitted its quarterly USERRA reports on or before the statutory deadline, submitting them 1 to 3 days early. Table 1 below shows the extent to which each agency was timely in submitting its quarterly report to Congress. 
DOL officials said that to ensure data accuracy and avoid having to regularly adjust previously submitted quarterly reports in future reports, DOL reserves 2 weeks after the end of each quarter for staff to finalize database entries on all investigations and referral actions taken through the last day of that quarter. The quarterly report is then drafted and reviewed by the responsible officials. Although recent reports have been late, DOL expects to improve its timeliness in submitting them as it gains more experience in preparing these reports. DOL officials said that the department had not communicated with Congress in advance of late submissions. Officials from DOJ’s Civil Rights Division said they typically submit reports 1 to 2 weeks before the statutory deadline to their Office of Legislative Affairs (OLA), which has sole responsibility for communication with Congress. OLA officials said that it takes from 1 week to 1 month for a report to go through OLA’s review process before it can be submitted to Congress. When a report is expected to miss a deadline, OLA officials said that they do not generally communicate with members of Congress or their staff. For DOL, DOJ, and OSC, the data contained in the quarterly reports during the period covered by our review were generally consistent with our analysis. However, the three agencies did not use the same criteria for determining the number of cases that exceeded or met the statutory deadline in their quarterly reports. Specifically, DOL and OSC included cases where (1) the applicable statutory deadline occurred within the quarter, or (2) the deadline occurred in a later quarter but the agency met its statutory requirement within that quarter. However, DOJ reports the number of cases that met or exceeded the deadline only for cases where the deadline occurred within the quarter. VBIA 2008 requires that data contained in the reports be categorized in a uniform way. 
Because the three agencies are not using the same criteria to determine which cases to include in their quarterly reports, Congress may not be able to assess trends across the three agencies. Although the data contained in DOL’s quarterly USERRA reports during the time of our review were generally consistent with our analysis of data from its USERRA database, DOL’s process for identifying and correcting errors in its quarterly reports accounts for some of the differences we found. To prepare its quarterly reports, DOL extracts data on the relevant cases from its USERRA database and generates two separate lists: one for investigations, which are subject to a 90-day deadline, and another for referral requests, which are subject to the 60-day deadline. After both lists have been sorted and analyzed to produce a draft report, the lists are reviewed by DOL officials who oversee investigations and referral processing. From those lists, DOL identifies cases that exceeded the deadline and then reviews documentation for these cases to determine if an extension had been recorded in the file but had not been entered in DOL’s USERRA database. If it identifies such a record, it makes a notation as part of its analysis, but does not always make a correction in its system of record—the USERRA database. We identified four referrals where the USERRA database showed that DOL exceeded the 60-day referral deadline without an extension, but DOL made a written notation in its analysis used to produce its quarterly report reflecting consent to an extension. As of March 1, 2010, the date that DOL extracted the data for this review, DOL had not updated its database to reflect these extensions. After we notified DOL that its USERRA database had not been updated, DOL provided us documentation of consent for extensions in these four cases and updated its USERRA database to reflect the extensions. 
GAO’s Standards for Internal Control in the Federal Government require that agencies establish a system to ensure the accuracy of the data they process. These standards state that such a system should employ a variety of control activities to ensure accuracy and completeness, such as using edit checks in controlling data entry and performing data validation and editing to identify erroneous data, among other activities. Because DOL does not consistently make corrections to the data in its USERRA database, DOL cannot ensure it has accurate and readily available data to monitor, track, and report on its performance in meeting VBIA 2008 requirements. A better system to correct its data could help DOL ensure that it is accurately meeting congressional reporting requirements. Although the data contained in DOJ’s quarterly reports that we analyzed were generally consistent with our analysis of the data from its WordPerfect log, DOJ does not have a standard, repeatable process to input USERRA data and produce its quarterly reports. DOJ relies on one individual to enter the data and prepare its quarterly reports. A supervisory equal opportunity specialist in the Employment Litigation Section of the Civil Rights Division is responsible for inputting all the USERRA data necessary for reporting on timeliness into a WordPerfect log. DOJ does not have any written definitions of the data elements in the log. When this employee takes leave, the deputy section chief serves as the backup to collect the relevant documents, but does not enter data into the log; the supervisory equal opportunity specialist enters the data upon return from leave. DOJ officials said that no other DOJ employee is knowledgeable about operating the WordPerfect log. Moreover, there is no system to check and ensure that data are entered correctly. To prepare the reports, the supervisory equal opportunity specialist manually counts the number of cases to be included in each category of the report. 
Although DOJ said that it uses a WordPerfect formula to calculate when the 60-day deadline occurs, DOJ does not use standard formulas or queries to generate the numbers for the reports. Such an approach, which requires manual counting, may be susceptible to error. We have previously reported on the importance of standard, repeatable procedures for producing reports. Moreover, GAO’s Standards for Internal Control in the Federal Government require that agencies establish a system to ensure the accuracy of data contained in reports. Implementing such a system could help DOJ improve the accuracy of its reports to Congress. Servicemembers who leave their civilian employment to perform military or other uniformed service need to be assured that the agencies assigned to assist them when they believe that their USERRA rights have been violated are processing their complaints in a timely manner. We found that DOL, DOJ, and OSC generally met initial or extended complaint processing deadlines. While all three agencies’ quarterly reports to Congress were generally accurate, the agencies did not use the same criteria for including cases in their quarterly reports. Moreover, DOL and DOJ were sometimes late in submitting quarterly reports to Congress and could improve their maintenance of data and reporting on the extent to which they have met statutory deadlines. Specifically, DOL does not maintain data to monitor the extent to which it met the requirement to notify servicemembers of their complaint processing rights within 5 days. Additionally, when DOL identifies errors in its USERRA database as it prepares its quarterly reports to Congress, it does not always correct the database. DOJ does not have a standard, repeatable process to input USERRA data and produce its quarterly reports, and it lacks data reliability checks. Addressing these data maintenance and reporting issues can help the agencies ensure that future USERRA quarterly reports are timely, accurate, and clear. 
We recommend that the Secretary of Labor, the Attorney General, and the Special Counsel establish consistent criteria for including cases in their quarterly USERRA reports to Congress.

We recommend that the Secretary of Labor direct the Assistant Secretary for the Veterans’ Employment and Training Service to ensure that a system is in place to monitor compliance with notification of rights requirements, similar to the systems used to assess compliance with other statutory deadlines, including maintaining data on such compliance; develop guidance and oversight mechanisms to ensure that changes are entered into the USERRA database as the quarterly reporting data are updated; and establish procedures to ensure that quarterly USERRA reports are submitted to Congress within 30 days of the end of each quarter, as required by VBIA 2008.

We recommend that the Attorney General establish a system of internal controls for collecting, maintaining, processing, and checking the reliability of data for the quarterly reports to Congress, and establish procedures to ensure that quarterly USERRA reports are submitted to Congress within 30 days of the end of each quarter, as required by VBIA 2008.

To help ensure that servicemembers who file complaints are adequately informed of their USERRA complaint process rights in accordance with VBIA 2008, Congress should consider amending USERRA to require DOL to report on the extent to which it is notifying complainants of their USERRA complaint process rights within 5 days of filing a complaint. To help ensure that DOJ handles state cases as expediently as private employer cases, Congress should consider amending USERRA to specifically require DOJ to adhere to the same 60-day deadline for state employer matters that it must meet for matters against private employers.
To help ensure that servicemembers in state employer cases are kept apprised of the status of DOJ’s decision making without potentially compromising DOJ’s ability to successfully bring suit against state employers, Congress should consider amending USERRA to require DOJ to notify these servicemembers of the status of DOJ’s efforts. To help ensure that Congress is fully apprised of efforts to resolve a case, Congress should consider amending USERRA to require DOJ and OSC to report on additional time taken to resolve a matter after they decline representation.

We provided a draft of this report to DOL, DOJ, and OSC for review and comment. In written comments, which are included in appendix II, DOL agreed with our recommendations and provided additional comments on the matter for congressional consideration regarding reporting on notification of rights. Specifically, DOL stated that the actions it plans to take to ensure that servicemembers are notified of their rights within 5 days will be sufficient and that it will notify Congress and GAO of its progress in this regard. Therefore, DOL’s view is that amending USERRA to require reporting on notification of rights is not necessary. While these steps are positive, we continue to believe that providing Congress this information on a regular basis is important for supporting Congress in its oversight role.

DOJ, in written comments, which are included in appendix III, agreed with our recommendations to establish consistent criteria for including cases in quarterly USERRA reports and to establish procedures to ensure that quarterly USERRA reports are submitted to Congress within 30 days of the end of each quarter. While DOJ agreed with our recommendation to improve its internal controls for producing its quarterly reports with respect to checking reliability of data, DOJ stated that its procedures for collecting, processing, and maintaining data for the quarterly reports are adequate.
We continue to believe that DOJ’s practice of having one person responsible for the collection and maintenance of the data and its process for manually counting claims to be included in the quarterly reports do not provide sufficient internal controls to ensure the continued accuracy of the data reported to Congress. DOJ also expressed serious concern about our matter for congressional consideration to amend USERRA to require DOJ to notify state employee servicemembers of the status of their cases, stating that it is extremely important to maintain its independence in determining whether to file suit in the name of the United States against a state. We continue to believe that it is important that state employees be made aware of the status of their USERRA complaint and that an amendment requiring notification would help ensure that this occurs. The suggested amendment would not require DOJ to request approval from the servicemember to extend deadlines and is consistent with DOJ’s current practice. In our view, this notification requirement would not compromise DOJ’s independence and would reinforce the important distinction between state employee cases, where DOJ represents the interests of the United States, and private employee cases, where DOJ represents the individual servicemember. For cases involving state employees, DOJ would be required to notify servicemembers of the status of their cases, whereas in cases involving private employees, DOJ is required to request approval from the servicemember to extend the deadline for DOJ’s review. DOJ also said that it believed that it was unnecessary to amend USERRA to require reporting on time spent on USERRA referrals after representation has been declined because VBIA 2008 does not require DOJ to engage in conciliation or settlement discussions. 
However, because the VBIA 2008 requirements were enacted in part due to concerns over the length of time it takes to resolve USERRA complaints, the proposed amendment is needed to provide Congress a full picture of the effort that DOJ makes on behalf of servicemembers.

OSC, in written comments, which are included in appendix IV, generally concurred with the conclusions and recommendations in our report. However, regarding the recommendation on establishing consistent criteria for including cases in quarterly USERRA reports, OSC noted that DOJ should adopt the criteria already used by DOL and OSC. As we state in our report, VBIA 2008 called for the agencies to uniformly categorize the data contained in their reports. Which criteria provide the greater benefit is a question best addressed by the agencies themselves.

We will send copies of this report to the Attorney General, the Secretary of Labor, the Associate Special Counsel, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have questions about this report, please contact me at (202) 512-6806 or at ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix V.

Our objectives were to assess the extent to which the Department of Labor (DOL), Office of Special Counsel (OSC), and Department of Justice (DOJ) (1) met the Veterans’ Benefits Improvement Act of 2008’s (VBIA 2008) complaint processing timeliness requirements between October 10, 2008, and December 31, 2009, and (2) submitted timely and reliable quarterly reports to Congress as required by VBIA 2008.
To assess the extent to which DOL, OSC, and DOJ met VBIA 2008’s complaint processing timeliness requirements, we obtained information on all USERRA complaints, with and without referral requests, received by DOL from October 10, 2008—the effective date of VBIA 2008—through December 31, 2009. We obtained data from DOL’s USERRA Information Management System on March 1, 2010. We considered cases that were closed as of February 28, 2010, as completed cases, while cases that remained open as of this date were treated as pending cases in our analysis. From DOL, we obtained 1,663 unique complaints and 205 referrals that met these criteria. In addition, there were 68 complaints and 34 referrals that remained open as of February 28, 2010. For the same time period, we also obtained data on referrals received by OSC, generated from its case tracking system, OSC 2000, and by DOJ, from the WordPerfect log used by the Employment Litigation Section of its Civil Rights Division. For OSC, we obtained 45 referrals that met these criteria. For DOJ, we identified 201 referrals that met these criteria and four cases that remained open as of February 28, 2010.

We first assessed the reliability of the data from the databases that each agency uses to maintain data for reporting to Congress under VBIA 2008. To assess the reliability of each of the databases, we compared data from the databases with data found in the official hard copy case files. For DOL and DOJ, we traced data from a random probability sample of cases to the case files. For DOL, our sample included a total of 60 unique cases where the servicemember did not request a referral and 52 cases where the servicemember requested a referral. For DOJ, our sample included 55 cases from a universe of 201 cases. Because OSC received only 45 referrals between October 10, 2008, and December 31, 2009, we compared the data from all 45 cases to the official hard copy case files.
For selected data elements related to reporting to Congress, we assessed reliability by attempting to match the data in the databases with the source case files. For each selected data element, we excluded cases from our data reliability assessment if information was missing from the case file, thus preventing a comparison between the data in the databases and the case file. We did not evaluate the accuracy of the source case files for the data elements reviewed. For data elements pertaining to time (i.e., open date and closed date), we considered the date a match if the date in the databases was the same as, or within 1 day of, the date reflected in the case file. To assess the reliability of the data elements pertaining to time, we assessed (1) the number of times that the electronic data did not match the hard copy case file data, (2) the average number of days that the electronic date differed from the hard copy date, and (3) the change in the number of cases exceeding the deadline based on differences between the dates contained in the electronic data and the hard copy data. Based on the collective results of each of these tests, we consider each agency’s data to be sufficiently reliable for the purposes of this report.

To determine the extent to which each agency met the complaint processing timeliness deadlines, we used the data from each agency’s database and calculated the average processing time for complaints and referrals received from October 10, 2008, through December 31, 2009, that closed by February 28, 2010. Because DOL does not maintain data in its USERRA database on the extent to which it notified the claimant of his or her complaint processing rights, we estimated that percentage based on the data gathered from the random probability sample of case files.
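The date-matching rule described above (an electronic date counts as a match if it equals the case-file date or differs by no more than 1 day) can be sketched as follows; the function and field names are illustrative assumptions, not drawn from the agencies' actual systems.

```python
from datetime import date

# Illustrative sketch of the date-matching rule used in the reliability
# assessment: two dates match if they are identical or within 1 day of
# each other. All names here are hypothetical.
def dates_match(db_date: date, file_date: date) -> bool:
    return abs((db_date - file_date).days) <= 1

def reliability_summary(pairs):
    """Count mismatched date pairs and their average difference in days."""
    diffs = [abs((d - f).days) for d, f in pairs if not dates_match(d, f)]
    avg = sum(diffs) / len(diffs) if diffs else 0.0
    return {"mismatches": len(diffs), "avg_diff_days": avg}

sample = [
    (date(2009, 3, 2), date(2009, 3, 3)),    # 1 day apart: counts as a match
    (date(2009, 5, 10), date(2009, 5, 15)),  # 5 days apart: a mismatch
]
print(reliability_summary(sample))  # {'mismatches': 1, 'avg_diff_days': 5.0}
```

The same summary can then feed the third test described above, comparing how many cases exceed a deadline under the electronic dates versus the case-file dates.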
We reviewed the extent to which there was evidence that DOL notified the servicemember of his or her USERRA complaint processing rights within 5 days of receiving the complaint and the time it took to notify the servicemember. We used four different indicators as evidence of notification: (1) the pen and ink notation at the bottom of the complaint form, (2) an E-mail containing the text of the standard notification, (3) the presence of the enclosure for “Your USERRA Complaint Process Rights,” or (4) a letter or E-mail containing language indicating that notification of rights was enclosed or attached. All percentage estimates presented in this report have a margin of error of plus or minus 11 percentage points or less at the 95 percent confidence level.

We also interviewed knowledgeable DOL, DOJ, and OSC officials. At DOL, we interviewed officials from its Veterans’ Employment and Training Service (VETS) National Office, VETS’s Atlanta Regional Office, and DOL’s Office of the Solicitor. At DOJ, we interviewed officials with the Employment Litigation Section of the Civil Rights Division. At OSC, we interviewed officials from the USERRA Unit and Information Technology Branch.

Timeliness of Submissions: To determine the timeliness of each agency’s submission of the quarterly reports to Congress, we reviewed the transmittal letters and other documentation of submission to determine whether the quarterly reports were submitted to Congress within 30 days after the end of the quarter. To determine each agency’s policies and procedures for submitting the quarterly reports, we interviewed officials from DOL’s VETS, DOJ’s Office of Legislative Affairs, and OSC’s USERRA Unit.

Reliability of Quarterly Reports: To assess the reliability of the quarterly reports, we used data from each agency’s database covering the period of October 10, 2008, through December 31, 2009, and, based on criteria provided by each agency, attempted to recreate the quarterly reports.
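The sampling margin of error cited above can be approximated with the standard formula for a proportion estimated from a simple random sample. This sketch ignores the finite-population correction and any design adjustments that would apply to the actual estimates, so it is illustrative only; the function name and inputs are assumptions.

```python
import math

# Approximate 95% margin of error for an estimated proportion p from a
# simple random sample of size n. Simplified: no finite-population
# correction or design effects, so results will not exactly match the
# report's published bounds.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# e.g., a rate estimated from a sample of 60 case files:
moe = margin_of_error(0.07, 60)
print(f"+/- {moe * 100:.1f} percentage points")
```

A finite-population correction would shrink this bound further when the sample is a large share of the universe, as with DOL's 112 sampled cases.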
For each agency’s report, we assessed the accuracy of the tables identifying the number of cases that met the deadline and the number of cases that exceeded the applicable deadline, with and without consent. We did not assess the data contained in the narrative portion of each agency’s reports. We also reviewed each agency’s policies and procedures for collecting, maintaining, and storing the data and for producing the reports, and interviewed officials from VETS’s National Office and Atlanta Regional Office; DOJ’s Employment Litigation Section of its Civil Rights Division; and OSC’s USERRA Unit and Information Technology Branch.

DOL: We recreated five quarterly reports by applying DOL’s criteria and using data provided from its USERRA database. For each quarter, we included investigations where the 90-day deadline occurred within the quarter, or the 90-day due date occurred in a later quarter and the close date occurred within the quarter. For referrals, we included cases where the 60-day deadline occurred within the quarter, or the 60-day deadline occurred in a later quarter and the last action on the referral occurred during the quarter. We found some differences between our analysis and the data in the quarterly reports. However, we were generally able to account for the differences. For referrals, these differences were due to DOL’s failure to correct its database to include extensions that DOL identified while reviewing the data extracts prior to submission of its quarterly reports. Specifically, when DOL identifies cases where the latest deadline has been exceeded, DOL reviews documentation for these cases to determine if an extension had been recorded in the file but had not been entered in DOL’s USERRA database. If DOL identifies such a record, it makes a notation as part of its analysis, but it does not always make a correction in its system of record—the USERRA database.
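The quarter-inclusion criteria described above for recreating DOL's investigation counts can be sketched as follows; the function and field names are assumptions for illustration, not the actual analysis code.

```python
from datetime import date

# Sketch of the inclusion rule described above: an investigation is
# reported in a quarter if its 90-day deadline falls within the quarter,
# or the deadline falls in a later quarter but the case closed during
# the quarter. All names here are hypothetical.
def in_quarter(d, q_start, q_end):
    return d is not None and q_start <= d <= q_end

def include_investigation(deadline_90, close_date, q_start, q_end):
    if in_quarter(deadline_90, q_start, q_end):
        return True
    return (deadline_90 is not None and deadline_90 > q_end
            and in_quarter(close_date, q_start, q_end))

q_start, q_end = date(2009, 1, 1), date(2009, 3, 31)
print(include_investigation(date(2009, 2, 15), None, q_start, q_end))              # True: deadline in quarter
print(include_investigation(date(2009, 5, 1), date(2009, 3, 10), q_start, q_end))  # True: later deadline, closed in quarter
print(include_investigation(date(2009, 5, 1), None, q_start, q_end))               # False: neither condition met
```

The referral criterion follows the same pattern, substituting the 60-day deadline and the date of last action on the referral.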
Specifically, we found four cases where DOL made a notation in its analysis used to produce its quarterly report, but the information did not appear in the data that we obtained from DOL’s USERRA database. For investigations, differences between our analysis and the data in the quarterly reports may have been due to changes in the status of a case being recorded in DOL’s USERRA database following the end of the quarter in which the case was reported.

DOJ: We recreated four of five DOJ reports using criteria provided to us by DOJ and applying those criteria to the data from DOJ’s WordPerfect log. We included cases where the 60-day deadline occurred within the quarter. In addition, we included state cases in our analyses through third quarter, fiscal year 2009—the same quarters that state cases were included by DOJ. We could not recreate DOJ’s quarterly report for first quarter, fiscal year 2009, because DOJ’s WordPerfect log did not contain data on all cases contained in the report—specifically, referrals that were received prior to October 10, 2008. Our analysis of the latter four reports showed that DOJ included one additional referral that exceeded the 60-day deadline with consent in second quarter, fiscal year 2009, and one case that exceeded the deadline without consent in third quarter, fiscal year 2009. This case exceeded the deadline by 2 days. Because of the small number of inaccuracies, we found DOJ’s fiscal year 2009 second through fourth quarter and fiscal year 2010 first quarter reports to be generally consistent with our analysis.

OSC: We recreated OSC’s reports by including cases where the 60-day deadline occurred within the quarter, or the 60-day deadline occurred in a later quarter but OSC completed processing the referral within the quarter. We did not find any discrepancies between our analysis and the data contained in OSC’s quarterly reports.
We conducted this performance audit from January 2010 through September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Laurie E. Ekstrand at (202) 512-6806 or ekstrandl@gao.gov.

In addition to the contact named above, individuals making key contributions to this report were Bill Reinsberg, Assistant Director; Jim Ashley; Gerard Burke; Karin Fangman; Donna Miller; Wesley Sholtes; Tamara Stenzel; Jessica Thomsen; and Greg Wilmoth.

Military Personnel: Improvements Needed to Increase Effectiveness of DOD’s Programs to Promote Positive Working Relationships between Reservists and Their Employers. GAO-08-981R. Washington, D.C.: August 15, 2008.

DOD Financial Management: Adjudication of Butterbaugh Claims for the Restoration of Annual Leave or Pay. GAO-08-948R. Washington, D.C.: July 28, 2008.

Military Personnel: Federal Agencies Have Taken Actions to Address Servicemembers’ Employment Rights, but a Single Entity Needs to Maintain Visibility to Improve Focus on Overall Program Results. GAO-08-254T. Washington, D.C.: November 8, 2007.

Military Personnel: Considerations Related to Extending Demonstration Project on Servicemembers’ Employment Rights Claims. GAO-08-229T. Washington, D.C.: October 31, 2007.

Military Personnel: Improved Quality Controls Needed over Servicemembers’ Employment Rights Claims at DOL. GAO-07-907. Washington, D.C.: July 20, 2007.

Office of Special Counsel Needs to Follow Structured Life Cycle Management Practices for Its Case Tracking System. GAO-07-318R. Washington, D.C.: February 16, 2007.

Military Personnel: Additional Actions Needed to Improve Oversight of Reserve Employment Issues. GAO-07-259. Washington, D.C.: February 8, 2007.

Military Personnel: Federal Management of Servicemember Employment Rights Can Be Further Improved. GAO-06-60. Washington, D.C.: October 19, 2005.

U.S. Office of Special Counsel’s Role in Enforcing Law to Protect Reemployment Rights of Veterans and Reservists in Federal Employment. GAO-05-74R. Washington, D.C.: October 6, 2004.
The Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA) protects the employment and reemployment rights of individuals who leave their employment to perform uniformed service. Concerned with the timeliness of USERRA complaint processing and the data reliability of agency reports, Congress imposed timeliness requirements on the Department of Labor (DOL), Department of Justice (DOJ), and Office of Special Counsel (OSC) under the Veterans' Benefits Improvement Act of 2008 (VBIA 2008) and required the agencies to submit quarterly reports to Congress on the extent of their compliance with the requirements. As required by VBIA 2008, this report assesses whether the agencies (1) met VBIA 2008 timeliness requirements for USERRA complaint processing and (2) submitted reliable and timely quarterly reports. GAO analyzed data in each agency's USERRA database and the extent to which those data were consistent with the quarterly reports.

DOL, DOJ, and OSC generally were timely in meeting VBIA 2008 deadlines to process complaints, but issues remain regarding notification of rights. Under VBIA 2008, DOL must complete its investigation within 90 days of receiving a complaint. If the complaint is not resolved and the servicemember requests to have the complaint referred, DOL must send the case to DOJ (if against a nonfederal employer) or OSC (if against a federal employer) within 60 days of receiving the request for referral. DOJ or OSC must then decide, within 60 days of receiving the case from DOL, whether to represent the servicemember. Any of the three agencies may seek consent to extend the applicable deadline. GAO's analysis showed that DOL, DOJ, and OSC generally met the original or extended deadlines to process complaints.
Although DOL does not maintain data in its USERRA database on notifying servicemembers of their USERRA complaint processing rights within 5 days of receiving the complaint, GAO estimated that in about 7 percent of the cases, DOL did not document notification of rights. Because VBIA 2008 does not require DOL to report on this requirement and DOL does not maintain and monitor such data, Congress and DOL cannot be assured that all servicemembers are adequately being informed of their USERRA process rights in accordance with VBIA 2008. According to DOJ, the 60-day statutory deadline does not apply to state employer cases. GAO's analysis showed that 6 of 12 cases against state employers took more than 60 days to process; by comparison, 23 of 189 cases against private or local government employers exceeded the 60-day deadline. Therefore, servicemembers who are employed by state governments may not be receiving the same treatment in terms of timeliness that other servicemembers are receiving under USERRA. In addition, GAO's analysis showed that in 6 of 13 cases where the servicemember was involved in settlement negotiations and DOJ declined representation, DOJ notified the servicemember of its decision but continued to aid the parties in facilitating a settlement. VBIA 2008 does not require agencies to report on time spent after making a decision on representation.

For DOL, DOJ, and OSC, the data contained in the quarterly reports during the period of GAO's review were generally consistent with GAO's analysis. However, the three agencies did not use the same criteria for including the number of cases that exceeded or met the statutory deadline in their quarterly reports. DOL and DOJ were consistently late in submitting quarterly reports to Congress, by as much as 46 days for DOL and 40 days for DOJ.
DOL does not always correct errors in its USERRA database after preparing its quarterly reports and therefore cannot ensure it has accurate, readily available data to monitor its performance in meeting USERRA requirements. DOJ does not have a standard, repeatable process to input USERRA data and produce its quarterly reports. GAO recommends that the three agencies use consistent reporting criteria and that the Attorney General and Secretary of Labor improve maintenance of data. Congress should consider amending USERRA to apply VBIA 2008 deadlines to state cases and add reporting requirements. The agencies generally agreed with GAO's recommendations but expressed concern over some of the matters for congressional consideration.
From its origin in 1956, the Disability Insurance (DI) program has provided compensation for the reduced earnings of individuals who, having worked long enough and recently enough to become insured, have lost their ability to work due to a severe, long-term disability. The program is administered by SSA and is funded through payroll deductions paid into a trust fund by employers and workers. In addition to cash assistance, DI beneficiaries receive Medicare coverage after they have received cash benefits for 24 months. In 2000, about 5 million disabled workers received DI cash benefits totaling about $50 billion, with average monthly cash benefits amounting to $787 per person.

To qualify for benefits, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Individuals are considered to be engaged in substantial gainful activity if they have countable earnings at or above a certain dollar level. In addition to determining initial eligibility, the SGA standard also applies to the determination of continuing eligibility for benefits. Beyond a 9-month trial work period and an additional 3-month grace period, during which beneficiaries are allowed to have any level of earnings without losing benefits, benefit payments are terminated once SSA determines that a beneficiary’s countable earnings exceed the SGA level. DI benefits are also terminated when a beneficiary (1) dies, (2) reaches age 65, at which point DI benefits are automatically converted to Social Security retirement benefits, or (3) medically improves, as determined by SSA through periodic continuing disability reviews. Under the Social Security Act, the Commissioner of Social Security has the authority to set the SGA level for individuals who have disabilities other than blindness.
SSA has increased the SGA level several times over the past decade, to $500 per month in 1990 and to $700 per month in July 1999. In December 2000, SSA finalized a rule calling for the annual indexing of the nonblind SGA level to the average wage index (AWI) and recently increased the level to $780 on the basis of this indexing. The SGA level for individuals who are blind is set by statute and indexed to the AWI. Currently, the SGA level for blind individuals is $1,300 of countable earnings.

Despite considerable disagreement and uncertainty among researchers, policy makers, and disability advocates over the employment effects of the SGA on DI beneficiaries, there is a theoretical basis for believing that the SGA acts as a work disincentive. That is, to maximize income, maintain health insurance coverage, or achieve a desirable labor-leisure tradeoff, beneficiaries may be inclined to limit their work effort to remain eligible for program benefits. This economic rationale is supported by anecdotal evidence from some beneficiaries who have reported that, although they would prefer to work or have greater earnings, they are fearful of doing so because of the severe financial consequences of exceeding the SGA: losing cash benefits and, eventually, Medicare benefits. In addition, some workers with disabilities whose current earnings are above the SGA level, making them ineligible for the DI program, may reduce their earnings to become eligible for DI benefits.

Other researchers and policy makers believe that although the SGA level may serve as a work disincentive for some beneficiaries, this disincentive effect is likely to be very limited for several reasons. First, because severe long-term disability is a central criterion for DI eligibility, many DI beneficiaries may be unable to perform any substantial work. Even if they are willing and able to work, beneficiaries may face employment barriers, such as high costs for supportive services and equipment or discrimination.
In addition, we reported previously that many beneficiaries are unaware of DI program provisions affecting work, and several researchers we spoke with said that some beneficiaries may not even know how much they are allowed to earn. In terms of the SGA’s effect on those not currently on the DI rolls, disability advocates have stated that workers turn to the DI program only as a last resort and are not inclined to reduce income for the sole purpose of qualifying for benefits. Also, some studies indicate that the difficulty of qualifying for DI benefits (having to limit or cease work for at least 5 months before receiving benefits and undergoing a stringent review to certify one’s condition as severely disabled) may itself be a factor discouraging workers with disabilities from applying for these benefits.

Few empirical studies have examined the effects of the SGA on the work patterns of disabled beneficiaries and nonbeneficiaries. Two studies conducted in the late 1970s by SSA researchers found that the SGA level does not have a substantial effect on the work behavior of beneficiaries. These studies examined past increases in the SGA level to assess whether the increases led to greater labor force participation on the part of DI beneficiaries. Neither study identified any clear change in beneficiary earnings as the SGA level increased. However, a study conducted by the Office of Inspector General (OIG) at the Department of Health and Human Services (HHS) found that some beneficiaries who had completed a trial work period subsequently reduced their earnings below the SGA level so they could continue to receive DI benefits. Of the 100 cases sampled, 18 beneficiaries who were capable of working had quit work or reduced their earnings to maintain DI benefits. In addition, an internal study conducted by SSA researchers examined how the earnings patterns of DI beneficiaries age 55 or older changed after they converted to retirement benefits at age 65.
This study found that beneficiaries were more likely to return to work after converting to retirement benefits, which were subject to a more generous earnings limit. This evidence suggests that the SGA standard leads some beneficiaries to work less than they could. Despite the difficulties inherent in comparisons of different programs, studies of earnings limits in other programs may also provide some insights on the effect of the SGA. For example, studies of the retirement earnings test indicate that this limit probably caused some retirees to restrain their earnings in order to avoid having their benefits reduced. However, this “parking” effect appeared to be limited to only a relatively small proportion of the retiree population. For example, one study found that only about 2 percent of insured workers aged 65-69 had earnings at or near the retirement earnings limit. A study of the Supplemental Security Income (SSI) program’s 1619(b) provision also indicates that an earnings limit can result in beneficiaries limiting their work effort. As the 1619(b) earnings threshold was increased, some SSI beneficiaries increased their earnings in line with this threshold, which is consistent with the idea that beneficiaries restrain earnings in order to maintain program (in this case, Medicaid) eligibility. However, this “parking” behavior was limited to only those beneficiaries who had significant earnings—a group comprising about 2 percent of all adult, disabled SSI beneficiaries. Our analysis of SSA data indicates that the work patterns of most DI beneficiaries are unlikely to be affected by the SGA level. For example, from 1985 through 1997, on average, about 7.4 percent of DI beneficiaries who worked had annual earnings between 75 and 100 percent of the SGA level. These beneficiaries comprised only about 1 percent of the total DI caseload. 
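The earnings bands used in this analysis, relative to the annualized SGA level (the monthly SGA times 12), can be illustrated as follows; the thresholds mirror the bands discussed above, while the function names and labels are assumptions, not SSA's classifications.

```python
# Sketch of the earnings bands used in the analysis above, relative to an
# annualized SGA level (monthly SGA x 12). Labels and names are
# illustrative, not SSA's.
def annualized_sga(monthly_sga: int) -> int:
    return monthly_sga * 12

def earnings_band(annual_earnings: float, monthly_sga: int) -> str:
    sga_year = annualized_sga(monthly_sga)
    if annual_earnings <= 0:
        return "no earnings"
    if annual_earnings < 0.75 * sga_year:
        return "below 75% of SGA"
    if annual_earnings <= sga_year:
        return "75-100% of SGA"
    return "above SGA"

# With the $500-per-month SGA in effect after 1990, the annualized level
# is $6,000, so annual earnings of $5,000 fall in the near-SGA band:
print(earnings_band(5000, 500))  # 75-100% of SGA
```

Beneficiaries in the "75-100% of SGA" band are the group whose earnings cluster at or just below the SGA level, the pattern examined for evidence of "parking."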
This proportion of beneficiaries with earnings in this range of the SGA remained relatively small even though the number and proportion of DI beneficiaries who work rose dramatically during this period, increasing by almost 80 percent. Although almost one-fourth of working beneficiaries had earnings above the SGA level, most had very low earnings, well below the annualized SGA level. Even among those beneficiaries with earnings near the SGA level in a given year, most experience an eventual reduction in earnings in subsequent years. Nevertheless, some beneficiaries may change their work effort in response to the SGA level. For example, we found that about 13 percent of working beneficiaries who had earnings between 75 and 100 percent of the annualized SGA level in 1985 still had earnings near the SGA level in 1995, even though the SGA had increased from $300 to $500 a month during this period. In addition, about 7 percent of beneficiaries who did not have any earnings in the years immediately preceding their retirement earned income in one or more years following retirement, when the SGA earnings limit no longer applied. However, while these findings are suggestive of a possible effect on work effort, our analysis could not definitively link beneficiary work patterns to the SGA level, due in part to various limitations in SSA data, such as the lack of monthly earnings data.

From 1985 through 1997, on average, about 7.4 percent of DI beneficiaries who worked (comprising about 1 percent of the total DI caseload) had annual earnings between 75 and 100 percent of the SGA level (see table 1). On an annual basis, the number of beneficiaries with incomes clustering at or just below the SGA level increased almost fourfold in absolute terms, from 15,800 in 1985 to almost 60,000 in 1997.
However, the annual percentage of working beneficiaries with earnings between 75 and 100 percent of the SGA level fluctuated from 8.5 percent in 1988 to 5.1 percent in 1990 to 8.9 percent in 1997. The proportion of beneficiaries with earnings at or just below the SGA level remained small even though the proportion of DI beneficiaries who worked rose dramatically, increasing by almost 80 percent between 1985 and 1997 (see table 2). The number of beneficiaries who worked increased from about 220,000 in 1985 to over 675,000 in 1997 and increased as a percent of all DI beneficiaries in every year, including during the 1990-91 recession. Throughout the period, most working DI beneficiaries had very low earnings. For example, in 1995, the median annual earnings of working beneficiaries were about $2,157, and the majority of working beneficiaries—about 58 percent—earned no more than 50 percent of the annualized SGA level. Although median earnings of working DI beneficiaries were about 15 percent higher in 1997 than they had been in 1985, they remained well below the annualized SGA level. While mean earnings for this group fluctuated between a high of $5,851 in 1985 and a low of $4,697 in 1993, figure 1 indicates that even with the 67 percent increase in the SGA level in 1990, the earnings distribution of DI beneficiaries did not change considerably from 1985 to 1997. We also examined beneficiaries who had earnings above the SGA level to see if, over time, they tended to reduce their earnings to an amount less than but close to the SGA level in order to maintain eligibility for DI benefits. We found that the majority of beneficiaries in 1985 who had earnings exceeding the SGA level eventually experienced a reduction to no earnings or to an amount less than 75 percent of the SGA (see table 3). By 1989, 48 percent of these individuals had no earnings and only 2 percent had earnings between 75 and 100 percent of the annualized SGA level. 
This indicates that most beneficiaries who at some point have earnings above the SGA level do not subsequently engage in “parking” to remain on the DI rolls. Nevertheless, the large shift that we observed from earnings above the SGA to no or very low earnings does suggest decreasing ability or motivation to work. However, as late as 1997, about 32 percent of these beneficiaries had earnings exceeding the SGA level, indicating that some beneficiaries maintain their ability to achieve relatively substantial earnings. It is unclear why these individuals are able to consistently earn above the SGA level while retaining eligibility for DI benefits. Although beneficiaries in a trial work period or an extended period of eligibility may have earnings that exceed the SGA level, these work incentive periods are time-limited. Only beneficiaries who are blind are permitted, on a continuing basis, to earn above the SGA level that applies to nonblind individuals. However, we could not determine the status of individuals who had earnings exceeding the SGA level because SSA’s principal program data do not reliably identify whether a beneficiary is in a trial work period or extended period of eligibility and do not contain an indicator denoting whether a beneficiary is blind. Among beneficiaries who have earnings at or near, but not exceeding, the SGA level in a given year, most experience a reduction in earnings in subsequent years. For example, of beneficiaries in 1985 who earned between 75 and 100 percent of the annualized SGA level, 47 percent had no earnings by 1989, while the earnings of another 26 percent had fallen to between 1 and 74 percent of the annualized SGA level (see table 4). Nevertheless, about 11 percent of these beneficiaries still had earnings in 1989 between 75 and 100 percent of the annualized SGA level, suggesting that at least some beneficiaries may be attempting to stay close to the SGA without exceeding it. 
Even after the SGA level was increased in 1990, a small proportion of these beneficiaries continued to have earnings between 75 and 100 percent of the new annualized SGA level. For example, in 1995 about 13 percent of beneficiaries who had earnings between 75 and 100 percent of the annualized SGA level in 1985 still had earnings within this range of the higher annualized SGA level. Our review of the earnings of former DI beneficiaries who were converted to retirement benefits at age 65 also indicates that the work patterns of only a small proportion of beneficiaries are affected by the SGA. For example, we looked at DI beneficiaries who converted to retirement benefits at age 65 between 1987 and 1993. Of those in this group who had no earnings in the 3 years preceding retirement, about 7 percent did have earnings in 1 or more years following retirement (between ages 66 and 68), when the SGA earnings limit no longer applied. While small, the proportion of beneficiaries returning to work after retirement is greater than the proportion of older beneficiaries who return to work while still on the DI rolls. For example, we found that of beneficiaries who had no earnings at ages 55-57, about 3 percent had earnings at ages 58-60. These data suggest that, at least for a limited number of beneficiaries, the SGA may serve as a disincentive to work. For each analysis, the absence of key data elements made it difficult for us to determine the effects of the SGA level. For example, because SSA collects annual rather than monthly earnings data, we could not observe earnings relative to the SGA level on a monthly basis. However, many workers with disabilities may engage in only intermittent work throughout the year. The annual earnings data did not allow us to observe those individuals who only work several months out of the year and, in order to ensure receipt of benefits, “park” at the SGA level in those months. 
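A short numeric sketch illustrates this limitation, using the 1995 SGA level of $500 a month; the monthly earnings shown are hypothetical, not drawn from SSA data.

```python
MONTHLY_SGA = 500  # SGA level in 1995, in dollars per month

# The annualized SGA level used when only annual earnings data are available
annual_sga = MONTHLY_SGA * 12
print(annual_sga)  # 6000

# A hypothetical beneficiary who works only 4 months, "parking" just
# below the monthly limit in each of those months
monthly_earnings = [495] * 4 + [0] * 8
annual_total = sum(monthly_earnings)
print(annual_total)  # 1980

# The annual total ($1,980) falls far below the annualized level
# ($6,000), so annual data alone cannot reveal the month-level parking.
```

As the sketch shows, a beneficiary can earn near the monthly limit in every month worked and still appear well below the annualized SGA level in annual data.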
Another data limitation is the difficulty in identifying whether a DI beneficiary is in a trial work period. Without reliable information on the trial work period status of beneficiaries, we could not determine the full range of work incentives and disincentives potentially affecting the earnings of DI beneficiaries. In addition, neither the CWHS nor SSA’s principal administrative file for the DI program (the Master Beneficiary Record) contains data that identify whether a beneficiary is blind. Such a distinction is important to analyses relating to the SGA because blind beneficiaries are subject to a higher SGA limit than nonblind beneficiaries are. Distinguishing blind and nonblind beneficiaries may help explain why a substantial proportion of beneficiaries continue to earn above the nonblind SGA level while retaining DI eligibility. Data and methodological limitations make it difficult to ascertain the effect of the SGA on DI program entry and exit rates. After 1990, the rate of program entry initially increased and then gradually declined. Although some researchers and policy makers believe that an increase in the SGA could encourage more people who are capable of working to enter the rolls, our analysis indicates that most new entrants were either not able or not inclined to increase their earnings or work at all. However, because of data limitations and the wide range of other possible factors affecting program entry, the link between the increase in the SGA level and these trends in entry is unclear. The analysis of program exits indicated that although the number of beneficiaries exiting the program rose over the 7 years after the 1990 increase in the SGA level, the annual rate of exit generally declined. 
While beneficiary deaths and conversions to retirement benefits accounted for most program exits, the percentage of exits caused by medical improvement or a return to work increased gradually, from 1.9 percent in 1985 to 9.2 percent in 1996, and then rose sharply to 19.9 percent in 1997. However, the aggregation of medical improvement and return-to-work data prevents us from obtaining a full understanding of the link between the SGA and DI program exit behavior. Our analysis showed that the rate of program entry varied between 1990 and 1997, reaching a high of 19.3 percent in 1991 and then gradually declining, except for a slight upward movement in 1996, to a low of 10.3 percent in 1997 (see figure 2). In 1990, there was a discernible jump in the rate of program entry, which continued into 1991. The 1990 and 1991 rates were higher than the rates in any of the pre-1990 years we analyzed. The 1990 increase in the SGA level could have encouraged additional program entry to the extent that individuals with disabilities whose earnings were between the pre-1990 SGA level and the 1990 SGA level could then qualify for benefits. Also, some individuals could have reduced their earnings in order to qualify for DI benefits and then increased their earnings once they became eligible. However, the data we examined indicate that most DI beneficiaries who entered the program between 1990 and 1995 were either not able or not inclined to increase their earnings or work at all after receiving benefits. Relatively few of these new DI beneficiaries—between 2 and 5 percent—increased their earnings above the SGA level within the first 3 years after their initial year in the program, and most new beneficiaries had no earnings during these first several years on the rolls. There are a number of factors other than the increase in the SGA level that likely affected the post-1990 DI program entry rates. 
For example, given that entry rates began to increase in 1988, prior to the 1990 SGA increase, the growth in program entry in 1990 and 1991 may simply represent a continuation of this earlier trend. In our prior work, we described several program factors, such as changes in the criteria for evaluating mental impairment disabilities, that appear to have contributed to this trend. In addition, a general labor force response to the 1990-91 recession might also explain the increase in entry. The recession could have resulted in layoffs of individuals with disabilities, as well as other workers. In response, some of these individuals might have sought entry to the DI program, rather than continuing a job search, even though they were previously able to work and earn above the SGA level. From the data, we cannot differentiate the reason for entry by a beneficiary, and so have no way of determining whether the increase in entry was related to the increase in the SGA level or some other factor. Likewise, the ensuing economic expansion may have helped to ensure continuing work and significant earnings for some disabled workers, thereby reducing the number of workers seeking and receiving DI benefits. In addition, advances in medicine and medical care, along with advances in and increased use of assistive devices and equipment (for example, adapted computers/keyboards), may have allowed some disabled workers to remain gainfully employed. Our analysis of DI program exits indicated that the yearly rate of exit generally declined over the 1990 to 1997 period even though the number of beneficiaries exiting the program was increasing (see figure 2). Program exit is largely driven by beneficiaries’ death or their conversion to retirement benefits, which together account for about 95 percent of aggregate program exits between 1985 and 1997 (see table 5). 
While medical improvement or return to work gradually increased from 2 to 9 percent of all exits between 1985 and 1996, there was a dramatic increase in the percentage of DI beneficiaries exiting the program in 1997 for these reasons. It is unclear what effect, if any, the SGA may have had on these program exits because, although the data indicate whether the beneficiary reached retirement age or died, they do not indicate whether the beneficiary returned to work or whether a continuing disability review determined that they had medically improved. The large increase in the percentage of beneficiaries returning to work or medically improving for 1997 may be related, in part, to an increase in the number of continuing disability reviews that occurred during 1997. However, a strong economy that drew more DI beneficiaries into the labor force or other factors also may have played a role. Our analysis of DI beneficiary earnings from the mid-1980s to the mid- 1990s suggests that the SGA level may act as a work disincentive for only a small proportion of DI beneficiaries. This is generally consistent with studies of the SGA and of earnings limits in related programs, which indicate that such limits, at most, affect a relatively small proportion of beneficiaries. However, the limitations in the available data mean that our findings should be accepted with caution. The lack of data on monthly earnings; on beneficiaries who are blind or are in a trial work period; and on beneficiaries who return to work, to name only a few areas, all hampered our efforts to arrive at more definitive conclusions. In particular, the lack of data identifying whether a beneficiary is blind precluded us from analyzing the effect of different SGA levels on blind and nonblind DI beneficiaries. We place significance on our finding that the SGA’s effect remained small even as increasing numbers of DI beneficiaries entered the labor force. 
While the DI program grew by almost 72 percent from 1985 to 1997, the number of employed DI beneficiaries more than tripled. The number of working DI beneficiaries increased every year, even during the recession of the early 1990s. Yet it is unclear what has been driving this increase in employment. Given that most of these new workers have earnings far below the SGA level and remain at those low levels for many years afterwards, it is unlikely that this increase was caused by an increase in the SGA level. Other possible explanations include a buoyant economy throughout most of the period since 1985, enhanced employment protections for the disabled, increased availability of assistive technology, and a greater acceptance of hiring workers with disabilities by society in general. While this development has important implications for the DI program, the lack of data again makes it difficult for program officials, researchers, and policy makers to gain a better understanding of this phenomenon and reconfigure the DI program’s return-to-work incentives to reinforce this trend. The DI program, program beneficiaries, policy makers, and the general public could all greatly benefit from the collection of data that would facilitate a more comprehensive analysis of critical employment and program policy issues. Therefore, we recommend that the Commissioner of SSA take action to identify the full range of data necessary to assess the effects of the SGA on DI program beneficiaries, develop a strategy for reliably collecting these data, and implement this strategy in a timely manner, balancing the importance of collecting such data with considerations of cost, beneficiary privacy, and effects on program operations. In our study, we noted several key data elements that would be needed for a comprehensive assessment of the effects of the SGA level on program beneficiaries. 
These include data that identify the monthly earnings of beneficiaries and whether a beneficiary is blind, is participating in a trial work period, or has exited the DI program based on a return to work. Some of these data, such as information identifying whether a beneficiary is blind or is participating in a trial work period, are already collected by SSA but are not reliably recorded and maintained in SSA’s principal DI program database. Other information, such as monthly earnings data, may be difficult to collect and may involve data issues that extend beyond the DI program. There may also be additional information, beyond the data elements we discussed, that SSA considers necessary for assessing the effects of the SGA. In commenting on a draft of this report, SSA agreed with our recommendation. The agency, while acknowledging that it currently does not have the capability in place to track the employment and earnings patterns of DI beneficiaries, noted that it has made a commitment to collecting and analyzing DI beneficiary data. SSA stated that it is currently reaffirming that commitment and is developing a strategy to improve its efforts to collect such data. (SSA’s comments appear in app. II.) We believe that SSA’s stated commitment to developing improved data on DI beneficiaries’ earnings and employment represents a positive development. Such a commitment should include the development and implementation of a comprehensive strategy that would collect the data required for assessing the earnings and employment of all DI beneficiaries rather than just a subset, such as those who participate in particular programs initiated under the Ticket to Work Act. This strategy should also include additional data elements that would provide insight into our understanding of DI beneficiaries’ employment, such as data identifying beneficiaries who are blind or who are participating in a trial work period. SSA also provided some technical comments. 
The agency noted that although our report acknowledges various data limitations that affected our analysis, including limitations in SSA’s earnings data, we did not sufficiently emphasize the extent to which these earnings data might include income that is not related to current employment. In addition, SSA stated that our data on reasons for exit, or termination, from the DI program varied from those published by SSA’s Office of the Chief Actuary. Finally, SSA questioned our analysis of beneficiaries whose earnings consistently exceed the SGA level. With regard to our discussion of limitations in the earnings data, we agree with SSA that these limitations are considerable and have noted them throughout the report. In particular, SSA highlighted the potential for SSA earnings records to include income that may not be related to current work. It is unclear whether a substantial portion of the earnings data we analyzed was unrelated to current work. For example, an SSA study stated that the agency’s earnings data may include “certain payments from profit sharing plans.” However, the study also noted that few beneficiaries had actually participated in such plans. In addition, although this study indicated a sizeable discrepancy between SSA earnings data and earnings reported by some beneficiaries in a survey interview, it was unclear whether this discrepancy was due to limitations in SSA data or to limitations inherent in self-reported data. Regarding the differences between our data on the reasons for program exit, or termination, and the data reported by SSA, we acknowledge in the report that SSA data indicate somewhat higher exit rates due to reasons other than death and conversion to retirement benefits. We believe that these differences are likely attributable to the use of different sources of data on program exit. 
We used the CWHS because it was the most appropriate data set for conducting a longitudinal analysis of beneficiaries’ earnings in relation to the SGA level. Further, although the termination rates we report do differ from SSA’s data, the trends portrayed in our data on exits are, in fact, generally consistent with those indicated in the SSA data. For example, where SSA’s data indicate a 10.5 percentage point increase in program exit due to medical recovery or return-to-work from 1996 to 1997 (from 12.3 percent to 22.9 percent), GAO’s data similarly indicate a 10.7 percentage point increase (from 9.2 percent to 19.9 percent). Given that our discussion of program exits focuses primarily on trends rather than absolute numbers, we believe that our data adequately support our finding. Finally, regarding the issue of some beneficiaries being able to consistently earn above the SGA level, we identified in the report several reasons why some beneficiaries might do so. For example, such beneficiaries may be blind and thus subject to a higher SGA level than nonblind beneficiaries. We also note that without better DI program data, including data identifying whether a beneficiary is blind or in a trial work period, we could not provide a more definitive explanation of this phenomenon. Examination of individual case folders to determine why beneficiaries continued to earn above the SGA level—an approach suggested by SSA—was not a viable option for us on this study given our resources and timeframes for completing the study. SSA also made a few other technical comments, which we incorporated where appropriate. We are sending copies of this report to the Honorable Jo Anne B. Barnhart, Commissioner of Social Security; appropriate congressional committees; and other interested parties. We will make copies available to others on request. This report is also available on GAO’s home page at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please call me at (202) 512-7215 or Charles A. Jeszeck at (202) 512-7036. Other individuals making key contributions to this report include Mark Trapani, Michael J. Collins, and Ann Horvath-Rose. To conduct our work, we analyzed data from the Social Security Administration’s (SSA) Continuous Work History Sample (CWHS). The CWHS consists of records representing a longitudinal 1 percent sample of all active Social Security accounts. It is designed to provide data on earnings and employment for the purpose of studying the lifetime working patterns of individuals. The data, drawn from SSA administrative data sets, contain information on an individual’s Disability Insurance (DI) eligibility, earnings, and demographic characteristics. We did not independently verify the accuracy of the CWHS data because they have been widely used by researchers and are derived from a common source of DI program information. From the total sample of 2,955,942 individuals, we selected a subsample of 92,662 individuals who were eligible for DI benefits at some point between 1984 and 1998. To obtain this sample, we excluded individuals whose Social Security record indicated a gap in DI entitlement, DI beneficiary status beginning before age 18 or continuing past age 64, or a date of death preceding their DI beneficiary status; we also excluded individuals not identified as the primary beneficiary. We could not determine the exact date of eligibility because the CWHS only provides eligibility status as of December 31 of each year. Therefore, individuals were included in our analysis only as of their second year of DI eligibility to ensure that the earnings we observed occurred only while an individual was in beneficiary status. 
In addition to our main sample, we selected another subsample of 9,990 DI beneficiaries who reached age 65 during the 1987 to 1993 time period for the purpose of analyzing DI beneficiaries who were converted to retirement benefits. All samples are subject to sampling error, which is the extent to which the sample results differ from what would have been obtained if the whole universe had been observed. Measures of sampling error are defined by two elements—the width of the confidence interval around the estimate (sometimes called precision of the estimate) and the confidence level at which the interval is computed. The confidence interval refers to the fact that estimates actually encompass a range of possible values, not just a single point. This interval is often expressed as a point estimate, plus or minus some value (the precision level). For example, a point estimate of 75 percent plus or minus 5 percentage points means that the true population value is estimated to lie between 70 percent and 80 percent, at some specified level of confidence. The confidence level of the estimate is a measure of the certainty that the true value lies within the range of the confidence interval. We calculated the sampling error for each statistical estimate in this report at the 95-percent confidence level. All percentage estimates from the sample have sampling errors (95 percent confidence intervals) of plus or minus 10 percentage points or less, unless otherwise noted. All numerical estimates other than percentages have sampling errors of 10 percent or less of the value of those numerical estimates, unless otherwise noted. To analyze the effects of the SGA on the earnings of DI beneficiaries, we attempted to determine whether DI beneficiaries engage in “parking,” that is, whether they limit their earnings to a level at or just below the SGA limit in order to maintain eligibility for benefits. 
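For illustration, the confidence-interval arithmetic described above can be sketched in a short program. This is a simplified sketch that assumes a simple random sample and the usual normal approximation; the 75-percent point estimate follows the example in the text, while the sample size of 300 is hypothetical, chosen only so the half-width comes out near 5 percentage points.

```python
import math

def proportion_ci_95(p_hat, n):
    """95-percent confidence interval for a sample proportion
    (normal approximation, z = 1.96)."""
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# A point estimate of 75 percent from a hypothetical sample of 300
low, high = proportion_ci_95(0.75, 300)
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 70% to 80%
```

Larger samples narrow the interval, which is why estimates drawn from small subgroups of the CWHS carry wider sampling errors.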
If beneficiaries do indeed park, then we would expect to find a clustering of earnings just below the SGA level. The occurrence of such clustering would provide a fairly strong indication that beneficiaries are limiting their employment and earnings to stay in the DI program, thereby reducing program exit. In addition, to the extent that beneficiaries park or otherwise limit their earnings due to a work disincentive effect of the SGA, we would expect an increase in the SGA level to result in a corresponding increase in beneficiaries’ earnings. To determine if earnings clustered around the SGA level, we examined the distribution of earnings both before and after the 1990 increase in the SGA level to see what proportion of beneficiaries had annual earnings at or within 5 percent, 10 percent, and 25 percent of the annualized SGA level. We also tracked those beneficiaries who had earnings near the annualized SGA level in a given year to see if they maintained this level of earnings in subsequent years. In addition, we tracked those beneficiaries who were on the rolls and had no earnings or had earnings below the annualized SGA level prior to 1990 to see if they increased their earnings and clustered around the new annualized SGA level. Finally, we examined beneficiaries who, in a given year, had earnings above the annualized SGA level to see if, over time, they tended to reduce their earnings to an amount near, but below, the SGA to maintain program eligibility. To further analyze whether DI beneficiaries limit their earnings due to the SGA, we observed how these individuals behave once they are no longer subject to the SGA level. We did this by looking at the earnings of DI beneficiaries who reached age 65 and were converted to the Old Age and Survivors Insurance (OASI) program. Once DI beneficiaries reach age 65, they are converted to retired worker status and their benefits are paid from the OASI trust fund. Likewise, they are no longer subject to the SGA limit. 
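The clustering check described above can be sketched as follows. The earnings figures are hypothetical, and the bands are read here as annual earnings at or below the annualized SGA level but within the stated percentage of it; this is an illustrative simplification, not the actual CWHS computation.

```python
def share_within_band(earnings, annual_sga, pct_band):
    """Share of workers whose annual earnings fall at or within
    pct_band percent below the annualized SGA level."""
    lower = annual_sga * (1 - pct_band / 100)
    in_band = [e for e in earnings if lower <= e <= annual_sga]
    return len(in_band) / len(earnings)

# Hypothetical annual earnings; annualized SGA of $6,000 ($500/month x 12)
earnings = [1200, 2100, 5500, 5800, 5950, 6400, 3000, 5700]
for band in (5, 10, 25):
    print(f"within {band}% of SGA: {share_within_band(earnings, 6000, band):.0%}")
```

In the actual analysis, a disproportionate share of beneficiaries falling just below the SGA level, relative to the rest of the earnings distribution, would suggest parking.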
If beneficiaries are limiting their earnings due to the SGA, then we would expect them to increase their earnings after retirement at age 65. Therefore, a finding that a significant proportion of former DI beneficiaries return to work or increase earnings after conversion would serve as some evidence for the work disincentive effect of the SGA. For DI beneficiaries who had entered the DI rolls prior to age 62, remained on the rolls until being converted to retirement benefits at age 65, and survived to age 68, we examined their earnings between ages 66 and 68 to determine whether there was an increase in earnings and employment after they left the DI program. To examine the effects of the SGA on DI program entry and exit rates, we looked at the rate of entry and exit both before and after the increase in the SGA. If people respond to the change in the SGA, then we might expect the rate of entry to increase after the increase in the SGA level. With the higher SGA level, some individuals with disabilities would now qualify for benefits if their earnings are between the old and new SGA level. Likewise, some individuals with earnings just above the new SGA level may reduce their earnings in order to qualify and then increase their earnings after they become eligible. Therefore, we examined the earnings, through 1997, of new beneficiaries who entered the DI program between 1990 and 1995 to see if they tended to increase their earnings after becoming eligible for benefits. In terms of program exit, we might expect exit rates to decrease after an increase in the SGA level since many working beneficiaries may now be further from the new level and some may even increase their earnings to an amount near the new level (but higher than the old level) without having their benefits terminated. 
We examined data indicating the reasons that beneficiaries exit DI to determine the extent to which program exits resulted from beneficiaries returning to work or medically improving versus retirements or deaths. The absence of key data in the CWHS and in other SSA data sets limited our ability to draw clear conclusions from our analysis. For example, while the SGA is a monthly level, the available earnings data are recorded only on a yearly basis. Therefore, we were not able to analyze DI beneficiaries’ monthly earnings in relation to the actual, monthly SGA limit. Instead, we examined beneficiary earnings in terms of the annualized SGA level; that is, we multiplied the monthly SGA amount by 12 to permit comparison of the monthly limit to the annual data. (For example, the SGA level in 1995 was $500 per month, so the annualized SGA level was $500 multiplied by 12, or $6,000.) As a result, we were not able to identify parking that might have occurred among beneficiaries who, for example, worked for only a few months during the year but limited their earnings to a level near, but not exceeding, the SGA level in each of those months. Nevertheless, our analysis did allow us to identify individuals who consistently have earnings at or near the SGA level. To the extent that beneficiaries are trying to maximize their income—that is, earn as much as they can within a given year while maintaining DI eligibility—there may be a significant number of beneficiaries who have sustained earnings up to the SGA level through much of the year. Another data limitation concerned beneficiaries who are in a trial work period. The trial work period allows beneficiaries to test their ability to work without penalty. Therefore, beneficiaries can earn any amount without being subject to the SGA limit. Neither the CWHS nor other SSA data sets provide a reliable means for identifying beneficiaries in a trial work period. 
As a result, in our parking analysis, we were not able to distinguish the earnings of beneficiaries who are subject to the SGA limit from those who are not subject to this limit. Although the trial work period allows beneficiaries to earn any amount, there is no reason to believe that all beneficiaries in a trial work period will have earnings greater than the SGA level. An individual’s disability may limit his/her earnings to well below the SGA level. However, we do not believe that this limitation affected our analysis to a great extent because it is unlikely that the earnings of beneficiaries in a trial work period would systematically fall at or near the SGA level and thereby skew our analysis. The identification of blind and nonblind beneficiaries also created a limitation in our analysis. The CWHS does not allow us to distinguish between blind and nonblind DI beneficiaries, which is important since blind beneficiaries are subject to a higher SGA limit. Some of the beneficiaries that we observe earning above the nonblind SGA limit may actually be blind individuals. In addition, if a substantial number of blind beneficiaries had earnings just below the nonblind SGA level, then our analysis could exaggerate the existence of parking. However, this limitation is not likely to have substantially impacted our analysis of parking among nonblind beneficiaries because blind individuals represent only about 2 percent of the DI caseload and therefore probably comprised a very small portion of our sample. Perhaps more importantly, the inability to identify blind beneficiaries means that we could not assess the extent to which they exhibit parking behavior. As a result, our analysis may be understating the extent of parking in the DI program. Finally, the lack of data on impairment-related work expenses (IRWE) also limited our ability to analyze the effects of the SGA level on employment. 
SSA deducts the cost of certain impairment-related expenses needed for work from earnings when making SGA determinations. The inability to identify IRWE could exaggerate the effect of the SGA on earnings since some beneficiaries near or above the SGA level may not have been at this level once IRWE was subtracted from their earnings. However, the inability to determine IRWE is not likely to have significantly impacted our analysis because SSA officials told us that IRWE was applied in only a very limited number of cases during the years of our analysis. Despite these substantial limitations, the CWHS is the best available data set for identifying the basic program information needed to conduct our analysis within acceptable timeframes. The principal alternative data set within SSA—the Master Beneficiary Record—does not lend itself to easy analysis because it is designed to fulfill SSA’s administrative objectives. In particular, we chose not to use this data set because it would not have provided the longitudinal data that we needed unless it was linked with other SSA administrative files containing DI program information. Linking these complex files would have raised many uncertainties regarding the ultimate quality of the data and would have added substantial time and complexity to our analysis. In addition, non-SSA data sets, such as the Census Bureau’s Current Population Survey, could not serve our needs because, among other limitations, we would not be able to adequately identify DI program participation for most of the years of our analysis. In addition to data limitations, our analysis was also constrained by the lack of any quantitative evaluation of other possible factors affecting the earnings of DI beneficiaries and disabled workers.
For example, our analysis does not control for other factors in the economy such as recessions, implementation of the Americans With Disabilities Act (ADA), advances in medicine and medical care, and advances in and increased use of assistive devices and equipment. A recession may increase entry into the DI program, but implementation of the ADA and improvements in medical care and assistive devices and equipment may either decrease entry or increase exit. The inability to control for these factors limited our ability to make clear inferences from the data regarding the effects of the SGA.
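The annualized-SGA comparison described above can be sketched in a few lines. The 1995 monthly SGA amount ($500) comes from the report; the "near" band used to flag possible parking is a hypothetical assumption, since the report does not define a precise cutoff for "at or near" the SGA level.

```python
# Illustrative sketch, assuming a hypothetical 10% "near" band below the
# annualized SGA limit. Only the 1995 nonblind SGA amount is from the report.

MONTHLY_SGA = {1995: 500}  # nonblind SGA level, dollars per month

def annualized_sga(year):
    """Annualize the monthly SGA limit (monthly amount x 12)."""
    return MONTHLY_SGA[year] * 12

def near_sga(annual_earnings, year, band=0.10):
    """Flag annual earnings at or just below the annualized SGA level."""
    limit = annualized_sga(year)
    return limit * (1 - band) <= annual_earnings <= limit

print(annualized_sga(1995))   # 6000
print(near_sga(5800, 1995))   # True: within the assumed 10% band
print(near_sga(3000, 1995))   # False
```

As the surrounding text notes, a yearly flag like this cannot detect a beneficiary who parked just below the monthly limit for only part of the year.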
The Social Security Administration's (SSA) Disability Insurance (DI) program paid $50 billion in cash benefits to more than five million disabled workers in 2000. Eligibility for DI benefits is based on whether a person with a severe physical or mental impairment has earnings that exceed the Substantial Gainful Activity (SGA) level. SSA terminates monthly cash benefit payments for beneficiaries who return to work and have earnings that exceed the SGA level--$1,300 per month for blind beneficiaries and $780 per month for all other beneficiaries. GAO found that the SGA level affects the work patterns of only a small proportion of DI beneficiaries. However, GAO also found that the SGA may affect the earnings of some beneficiaries. About 13 percent of those beneficiaries with earnings near the SGA level in 1985 still had earnings near the SGA level in 1995, even though the level was increased during that period. The absence of key information identifying the monthly earnings of beneficiaries, their trial work period status, and whether they are blind limited GAO's ability to definitively identify a relationship between SGA levels and beneficiaries' work patterns. Data limitations also make the effect of the SGA on DI program entry and exit rates difficult to isolate. Although the rate of program entry increased in the years immediately following a 1990 increase in the SGA level, it then gradually declined to a level below the pre-1990 entry rates. Since 1990, DI exit rates continue to be driven largely by beneficiary death and conversion to retirement benefits. However, the percentage of all exits caused by improvements in medical conditions or a return to work increased slowly, from 1.9 percent in 1985 to 9.2 percent in 1996, and then rose dramatically to 19.9 percent in 1997. 
A substantial increase in the number of continuing disability reviews done by SSA may account, in part, for this 1997 upturn, but data limitations preclude GAO from obtaining a full understanding of the link between the SGA and exit behavior.
Under the Communications Act, as amended, FCC regulates interstate and international communications by radio, television, wire, satellite, and cable. FCC regulates these industries by carrying out various activities, including issuing licenses for radio and television broadcast stations; overseeing the licensing, enforcement, and regulatory functions of cellular telephones and other personal communication services; regulating the use of the radio spectrum and conducting auctions of licenses for use of the spectrum; investigating consumer complaints and taking enforcement actions for violations of communications laws and commission rules; addressing public safety, homeland security, emergency management, and preparedness; educating and informing consumers about telecommunications goods and services; and reviewing mergers of companies holding FCC-issued licenses. FCC carries out these responsibilities using its 7 bureaus and 10 offices. Table 2 provides descriptions of each bureau’s responsibilities. To fulfill its responsibilities, FCC requires regulated entities, such as companies and licensees, in the communications industry that it regulates to maintain records, submit information, or disclose information to others. For example, television stations are required to provide FCC with information relating to construction permits, license renewals, and ownership. When collecting and managing information, FCC must adhere to various laws and regulations and coordinate with various entities. Paperwork Reduction Act. The PRA requires agencies, such as FCC, to obtain approval for each information collection instrument that meets the requirements of the PRA from OMB. Before approving a collection instrument, OMB is required to determine that the agency’s collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. 
Consistent with the PRA’s requirements, OMB has established a process to review all proposals by agencies to collect information from 10 or more persons, whether the collections are voluntary or mandatory. OMB’s approval of each information collection instrument usually expires within 3 years, and agencies must periodically ask for an extension until the collection is no longer needed. Records management by federal agencies. As required by statute, the head of each federal agency must establish and maintain an active, continuing program for the economical and efficient management of the agency’s records. The agency must provide for effective controls over the creation, maintenance, and use of records in the conduct of current business. Further, the agency must cooperate with the Administrator of General Services and the Archivist in applying standards, procedures, and techniques designed to improve the management of records; promote the maintenance and security of records deemed appropriate for preservation; and facilitate the segregation and disposal of records of temporary value. Federal Information Security Management Act of 2002 (FISMA). FISMA requires the head of each agency to provide information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency. Additional requirements. In addition to the requirements established by the various laws, federal agencies must follow regulations promulgated by agencies such as OMB and the National Archives and Records Administration (NARA). For example, OMB established policy for managing information through its A-130 Circular. 
NARA provides federal agencies with guidance on the management of records and other types of documentary materials and assists agencies in creating and maintaining accurate and complete records of an agency’s functions and activities and in ensuring the authorized, timely, and appropriate disposition of documentary materials. To develop new rules or modify existing rules, including rules pertaining to information collection instruments, FCC initiates a rulemaking process. When implementing a rulemaking process, FCC must follow the procedures set forth in the Administrative Procedure Act (APA). The APA generally requires a “notice and comment” or “notice and comment rulemaking” process to ensure that stakeholders and the public have adequate opportunity to participate in agencies’ rulemaking processes. In particular, the APA requires agencies, in most cases, to publish a notice of proposed rulemaking in the Federal Register and give interested parties an opportunity to comment on the proposed rule or rule change by providing “written data, views, or arguments.” FCC generally collects information through the following methods. Notice of Inquiry (NOI). FCC releases a NOI to gather information about a broad subject or as a means of generating ideas on a specific issue. Notice of Proposed Rulemaking (NPRM). FCC issues a NPRM to propose new rules or changes to its existing rules. The NPRM must include either the terms or substance of the proposed rule or a description of the subjects and issues involved and seek public comment on the proposal. Further Notice of Proposed Rulemaking (FNPRM). After reviewing comments in the NPRM, FCC can issue a FNPRM regarding specific issues raised in the process. The FNPRM provides an opportunity for the public to comment further on a related or specific proposal. 
As of April 2009, FCC used 413 OMB-approved information collection instruments to gather information, maintain records, or disclose information; however, the amount of information collected and managed varied by bureau or office. Responsibility for these collections is spread across 10 FCC bureaus and offices (see table 3). The Media, Wireline Competition, and Wireless Telecommunications bureaus are responsible for almost three- quarters of the collections, with 139, 85, and 74 collections, respectively. The estimated number of responses also varies significantly by bureau or office. For example, both the Wireline Competition and Consumer and Governmental Affairs bureaus anticipate over 140 million individual responses annually to their collection instruments, whereas the Enforcement Bureau anticipates fewer than 10,000. The burden associated with submitting the information also varies by bureau or office; according to the PRA, the term “burden” means the time, effort, or financial resources expended by persons to generate, maintain, or provide information to a federal agency. The Consumer and Governmental Affairs Bureau estimates over 39 million hours for its collection instruments, more than the other bureaus and offices combined; other bureaus with over 1 million estimated annual burden hours include the Media, Wireline Competition, and Wireless Telecommunications bureaus. FCC collects a wide variety of information through its 413 OMB-approved information collection instruments. In response to our request, FCC placed each of its 413 collection instruments in a category based on the industry and/or purpose. FCC identified 21 categories, and we further organized these 21 categories into five groups (see table 4). As shown in the table, there is significant variation in the number of collection instruments and estimated number of responses and annual burden hours across the 21 categories. 
We provide a description of the types of information collection instruments below. Requirements. FCC-defined information collection instruments for requirements span a wide variety of industries, including wireless and wireline telephone, broadcasting, cable, equipment, and public safety. FCC regulations require companies to provide a variety of information. For example, the Wireline Competition Bureau has 58 collection instruments of this type, including employment reports and local number portability for wireline telephone companies. The Wireless Telecommunications Bureau has 59 collection instruments of this type, such as reports on interference. Applications. Regulated entities, such as companies and individuals, seeking to provide certain services must apply for and receive a license from FCC. For example, the Media Bureau gathers license application information from companies seeking to provide radio and television broadcast service. The Office of Engineering and Technology gathers license application and equipment authorization information from companies seeking to market new wireless equipment, such as wireless telephones. Complaints. FCC collects consumer complaints on a variety of problems through OMB-approved information collection instruments. These complaints include a wide variety of problems such as deceptive or unlawful advertising or marketing; obscene, profane, and/or indecent material on broadcast radio or television; slamming, the illegal practice of changing a consumer’s telephone service without permission; and accessibility of communications services to persons with disabilities. Financial and accounting. The Wireline Competition Bureau collects information pertaining to both wireline carrier accounting and the universal service fund. The wireline carrier accounting collections include a variety of company submissions, such as information on rates, costs, investment, and customer satisfaction. 
The universal service fund collections include submissions necessary to pay into or receive payment from the universal service fund. We also include collection instruments for FCC’s financial operations in this group; these collections include, for example, documents for FCC’s regulatory fees. Other. These information collection instruments pertain to a variety of topics. For example, the Media, Wireline Competition, and Wireless Telecommunications bureaus conduct surveys of cable television operators, companies providing broadband service, and participants in FCC’s spectrum auctions, respectively. The Consumer and Governmental Affairs Bureau collects information pertaining to telecommunications relay service, which allows persons with hearing or speech disabilities to place and receive telephone calls. The Wireless Telecommunications Bureau uses collection instruments to receive applications for participants in spectrum auctions and auction participants seeking bidding credits. FCC has established commissionwide programs, policies, and procedures for the collection and management of information; FCC articulates these policies and procedures in its records management program, forms management program, security policies and procedures, and information system protection. However, since bureaus and offices are the primary users of information, implementing decisions generally occur at the bureau or office level. On the basis of responses to our questionnaires about 30 OMB-approved information collection instruments, FCC’s bureaus and offices collect and manage information in a variety of different ways. FCC has four primary directives that establish procedures for commission staff to follow for collecting and managing information. These directives help ensure FCC’s compliance with governmentwide laws and regulations pertaining to information collection and management, such as the PRA and FISMA. Records management program. 
By statute, the head of each federal agency must establish and maintain an active, continuing program for the economical and efficient management of all records of the agency. To meet this requirement, FCC established a records management program that sets out the policies, procedures, and activities needed to manage the commission’s recorded information. The objectives of FCC’s records management procedures are to accurately and completely document the policies and transactions of the commission; control the quantity and quality of records produced by the commission; establish and maintain mechanisms of control to promote effective and economical operations of the commission; simplify the activities, systems, and processes of creating, maintaining, and using records; and judiciously preserve and dispose of records. Within the Office of Managing Director, FCC’s Performance Evaluation and Records Management (PERM) staff carry out procedures to establish and oversee the records management program. The procedures require PERM staff to review and evaluate the program by conducting (1) on-site inspections, (2) annual reviews of all bureau and office records control schedules, and (3) reviews of bureau and office submissions of record holdings. Forms management program. FCC has a forms management program to comply with statutory, regulatory, and policy requirements for federal forms. The objectives of the forms management program are to ensure (1) forms are directly linked to accomplishing specific missions of the commission; (2) forms are properly designed with clear instructions to make it as easy as possible for respondents to provide information requested in the least amount of time; and (3) forms make effective and efficient use of electronic technologies for creating, collecting, distributing, and using these forms to record, store, and disseminate information.
The procedures state that each bureau and office chief is responsible for, among other things, ensuring that forms are created, maintained, and disposed of in conformance with the commission’s records management program. Security policies and procedures. FCC has security policies and procedures for the management and safeguarding of all nonpublic information. FCC has two categories of nonpublic information: 1. “Highly sensitive/restricted” information is defined as information that is highly market sensitive (i.e., disclosure of which is likely to substantially affect the value of securities traded publicly or a company’s market valuation) or other commercial or financial information the commission considers confidential and highly sensitive. For example, according to FCC officials, information that is submitted to FCC’s Disaster Information Reporting System may contain commercial information that could affect competition among wireless, wireline, broadcast, and cable providers and is treated as confidential by FCC. 2. “Internal use only” information is defined as all other nonpublic information not routinely available for inspection. For example, FCC maintains information for internal use only that allows its crisis incident managers to coordinate activities in the telecommunications industry and FCC in the event of a crisis. This internal document has contact information for FCC employees, other federal government agencies, state and local governments, and the communications industry. According to these procedures, the bureau or office creating or using nonpublic information is responsible for determining in which category the information should be placed. FCC’s procedures are designed to safeguard the nonpublic information in all formats including, but not limited to, paper, computer files, e-mails, diskettes, CDs, audio and video recordings, and oral communications. 
Among other things, the policies and procedures require that nonpublic information must be disposed of in a locked document disposal bin; such bins are located throughout FCC headquarters. Information systems protection. FCC has established policy to help ensure that adequate levels of protection exist for all FCC information systems, including the FCC network, applications and databases, and information created, stored, or processed. FCC’s Chief Information Officer (CIO) has primary responsibility for managing the commission’s policy. The policy states that the CIO’s responsibilities include (1) evaluating and approving the resolution of issues relating to information security, (2) developing and maintaining an agencywide information security program, and (3) training and overseeing personnel with significant responsibilities for information security. FCC also has a Chief Information Security Officer responsible for (1) developing plans for providing adequate information security for networks, facilities, and systems or groups of information systems; (2) conducting periodic assessments of the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the agency; and (3) developing plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. FCC’s policies and procedures for managing information are primarily carried out at the bureau or office level. As the primary users of information, FCC’s bureaus and offices manage most of the commission’s information collected through OMB-approved collection instruments. The previously mentioned records management guidance, which was established by PERM, gives bureau and office chiefs authority to establish their own procedures for managing records and ensuring staff observe guidelines. 
According to FCC officials, bureaus and offices are allowed to establish their own procedures. However, officials also said that they are not aware of any bureaus and offices that have officially done so. Similarly, officials with the bureaus and offices with whom we spoke said that they use the commissionwide guidance to manage their information and have no additional internal information procedures. According to responses to our questionnaires about 30 OMB-approved information collection instruments, FCC’s bureaus and offices collect and manage information in various ways based on the type of information. As mentioned previously, a reporting entity or third-party entity maintains the information associated with some FCC collection instruments. In those instances, certain questions pertaining to information collection, management, dissemination, and retention and disposal are not applicable. Therefore, we used two questionnaires, one for collection instruments where FCC maintains the information and one for collection instruments where the reporting entity or a third-party entity maintains the information. Of the 30 responses we received, FCC maintains the information for 21 collection instruments; for the remaining 9 collection instruments, the reporting entity or third party maintains the information. Most of the following analysis pertains to the 21 collection instruments where FCC maintains the information. Information collection. Respondents to our questionnaire reported that they collect information in different formats, including electronic, paper, and compact disc (CD). For the 21 collection instruments where FCC maintains the information, 14 respondents reported that the reporting entity submits information to the bureau or office in an electronic format. For example, one respondent reported that cost and revenue information from telephone companies, such as AT&T and Verizon, is submitted electronically to FCC. 
In three instances, the respondent reported that the bureau or office receives the information in a paper format. For example, 1 respondent reported that entities using certain radio frequency identification devices are required to submit their information on paper to register the location of these devices. Additionally, 3 respondents reported receiving electronic and paper submissions, and 1 respondent reported receiving both paper and CD submissions. The frequency of the collection also varied among the information collection instruments. Nine information collections are annual. For example, one respondent reported that FCC collects information annually from a sample of cable operators on average rates charged for the basic cable service, cable programming service tiers, and cable equipment. The frequency of the collection for the remaining collection instruments varied, from onetime submissions when filing an application to triennial filings. Information management. After collecting the information, bureaus and offices manage it in various ways. For all 30 collection instruments, 15 respondents reported that the bureau or office stores the information in a database. As we discussed previously, in nine instances the reporting entity or a third party maintains the information. The remaining respondents to our questionnaire reported that the bureau or office stores information in an internal network system or a file cabinet. Respondents to our questionnaire reported that bureaus and offices use several quality control procedures to ensure the accuracy of information. For example, three respondents reported that information systems run validity checks that ensure (1) certain data do not fall outside a reasonable range for that data and (2) all data have been submitted as required.
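Validity checks of the two kinds respondents described, range checks and completeness checks, might look like the following minimal sketch. The field names, required fields, and "reasonable" ranges here are hypothetical, chosen only to illustrate the technique; they are not drawn from any actual FCC collection instrument.

```python
# Hypothetical sketch of the two validity checks respondents described:
# (1) values must fall within a reasonable range, and (2) all required
# data must be submitted. Field names and ranges are illustrative only.

REQUIRED_FIELDS = {"licensee", "year", "subscribers"}
REASONABLE_RANGES = {"year": (1934, 2100), "subscribers": (0, 500_000_000)}

def validate_submission(record):
    """Return a list of problems found in one filed record."""
    problems = []
    for field in sorted(REQUIRED_FIELDS - record.keys()):
        problems.append(f"missing required field: {field}")
    for field, (lo, hi) in REASONABLE_RANGES.items():
        if field in record and not lo <= record[field] <= hi:
            problems.append(f"{field}={record[field]} outside range [{lo}, {hi}]")
    return problems

print(validate_submission({"licensee": "Acme", "year": 2009, "subscribers": 1200}))  # []
print(validate_submission({"year": 1900}))  # two missing fields, one range error
```

A Web-based filing system could run checks like these before accepting a submission, which matches the presubmission consistency checks described later in this section.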
A respondent reported that drop-down menus for individuals submitting data electronically provide checks on the quality of the data, as do pop-up warnings for data entries outside of the expected reasonable range. Other respondents reported that staff review the information for completeness and accuracy. For the 9 collection instruments wherein the reporting entity or a third party maintains the information, 4 respondents reported that the bureau or office may randomly select items for review, request the records be provided to the commission, and review the records for compliance with the commission rules; 3 respondents reported that the bureau or office does not verify the information. In terms of correcting errors, some respondents reported that the bureau or office contacts the individual or organization that submitted the information and asks that entity to make corrections. Other respondents reported that the bureau or office will contact the individual or organization and ask for clarification and update the information internally. Respondents to our questionnaire also identified several approaches the bureaus and offices employ to safeguard information. For the 21 collection instruments where FCC maintains the information, 12 respondents reported that their bureau’s or office’s information collections contain business sensitive or confidential information. Nine respondents reported that the information collections are less sensitive: 6 reported that the information is generally public data, 1 reported that the information is not typically business sensitive or confidential, and 2 reported that the information is not business sensitive or confidential. To ensure the safeguarding of information, 17 respondents reported that the bureau or office limits access to information. For example, 1 respondent who reported that much of the information is business sensitive also reported that access to the information is limited to bureau staff. 
In addition, staff members are instructed to keep the information and any related notes and worksheets confidential and to keep any paper copies of the information in locked cabinets. Fourteen respondents also reported that the information is safeguarded with data backup and storage, and 2 respondents reported that information is protected by encryption. Two respondents reported that confidential submissions are kept in a locked file cabinet. Information dissemination. FCC disseminates the information gathered through some of the collection instruments we reviewed to the public. Specifically, 15 respondents to our questionnaire reported that the information collected is disseminated through internal or external reports. Of the 15 respondents, 11 reported that some of the information collected is disseminated to the public on FCC’s Web site. For example, 1 respondent reported that information on applications and licenses for experimental use of radio frequencies is publicly accessible. Other respondents reported the public can request the information or view the information at FCC. For example, 1 respondent reported that in order to protect the identity of the entity submitting information, FCC releases redacted information in response to a request for information. Additionally, several respondents reported that internal reports are generated from the information collected. For example, 1 respondent reported that the bureau or office generates internal workload, trend, and management reports from the information. Information retention and disposal. Bureaus and offices collecting information via the collection instruments we reviewed retain the information for a period of 1 year to indefinitely. Specifically, 7 respondents to our questionnaire reported that the bureau or office retains the information indefinitely. 
For example, 1 respondent reported that although the actual survey forms are kept for 5 years, spreadsheets of information on surveys of license and spectrum auctions are kept indefinitely. Another respondent reported that the information is retained indefinitely because the disposal procedures are not yet in place. We also asked about the procedures for disposing of information. Six respondents reported that information is transferred to NARA after being retained by FCC for 5 years. Two respondents reported that paper documents are shredded and electronic records are physically destroyed or erased electronically. According to our review of 30 OMB-approved information collections, FCC’s bureaus and offices appear to follow commission- and governmentwide policies and procedures for the collection and management of information. For example, the bureaus and offices conduct quality control procedures for these information collections. However, in prior reports, we have identified weaknesses in FCC’s information collection and management practices, and some stakeholders with whom we spoke noted the same or similar weaknesses. In particular, these reported weaknesses concern FCC’s information collection processes and the estimated burden hours associated with FCC’s information collections. For the 30 information collections that we reviewed, FCC’s bureaus and offices appeared to follow commission- and governmentwide policies and procedures for the collection and management of information. In particular, we compared the 30 responses from our questionnaires with the commission’s internal policies and procedures and federal guidance on information collection and management practices. We found that the bureaus and offices followed the relevant policies and procedures for these 30 information collections.
For example, respondents to our questionnaire reported carrying out a variety of commissionwide information management procedures, including the following: Quality control. FCC bureaus and offices responsible for the collections reported using a variety of quality control procedures for managing the collections to ensure the accuracy and integrity of the information in the collections. These quality control procedures include general processes to verify information, such as edit checks; Web-based filing systems, which incorporate presubmission checks for internal consistency; and notification of the filers of erroneous information and the legal obligation to correct the information and resubmit the document. Safeguarding sensitive and confidential information. The bureaus and offices collecting confidential information reported implementing a variety of safeguards. These safeguards include system limitations that restrict access to the information and encryption of the data in information collections. As mentioned previously, in several reports, we have found weaknesses in certain information collection, management, and reporting processes at FCC. In several instances, FCC has not implemented our recommendations. For example, we recommended that FCC consider collecting additional data and developing additional measures to monitor competition for dedicated access service on an ongoing basis; FCC disagreed that it needed to better define competition and collect additional data, although on November 5, 2009, it released a Public Notice inviting comment on an appropriate analytical framework for examining dedicated access. Some stakeholders with whom we spoke also identified certain weaknesses in FCC’s processes. Information collection. 
We recently reported that when issuing an NPRM to gather public input before adopting, modifying, or deleting a rule, including those rulemakings involving information collection instruments, FCC rarely includes the text of the proposed rule in the notice, an omission that may limit the effectiveness of the public comment process. We recommended that FCC, where appropriate, include the actual text of proposed rules or rule changes in either an NPRM or an FNPRM before the commission votes on new or modified rules to improve the transparency and effectiveness of the decision-making process. Six stakeholders with whom we spoke also expressed concern about FCC’s lack of specificity when proposing the collection of information through the notice and comment process. For example, four stakeholders said that FCC does not initially specify the information that it wants to gather through a proposed collection instrument in the NPRM. Additionally, an official representing a major telecommunications company said that FCC issues NPRMs that do not contain the proposed rule for stakeholders to review and comment on. This official added that NPRMs usually contain only a general description of what the rule will be, on which companies can then submit comments. The lack of specificity in the NPRM makes it harder for stakeholders and the public to provide meaningful input on the proposed information collection instrument. Burden hour estimates. OMB recently released a request for comments on improving implementation of the PRA. In its request, OMB noted that agencies’ estimation methodologies can sometimes produce imprecise and inconsistent estimates of the burdens associated with information collection instruments. In particular, OMB noted that some estimates are not based on sufficiently rigorous or internally consistent methodologies. Additionally, OMB noted that some information collections may impose significant burdens on small businesses. 
Therefore, OMB sought comment on a variety of topics, including the following: examples of substantially inaccurate burden estimates for information collections, new or improved practices for estimating burden, examples of information collections that inaccurately estimate the impact of burden upon small entities, and whether or not a separate burden estimate should be created for small entities. Seven stakeholders with whom we spoke expressed concern about FCC’s burden hour estimates and the overall burden associated with the commission’s information collections, particularly the burden on small companies. Three stakeholders mentioned that FCC’s burden hour estimates are not accurate. For example, an official with a telecommunications company said that the burden estimates for some of the information collections the company submits are underestimated. In particular, this official said that aggregating and submitting information to FCC on broadband service (FCC Form 477) takes longer than FCC’s estimate; FCC’s estimated average burden hours per response for the Form 477 is 72 hours, yet this official said the actual time to prepare and submit the Form 477 is roughly 10 times that estimate. Six stakeholders mentioned the burdensome nature of some FCC collections, particularly for small companies. For example, one association said that providing data is a burden for some of the smaller companies, which might have as few as 500 customers. Another official noted that inaccurate estimates can adversely affect small companies, since the additional burden could negatively affect their operations. In general, these stakeholders did not provide concrete examples to substantiate their concerns about the estimated burden. On July 22, 2009, the FCC Chairman directed the Office of Strategic Planning and Policy Analysis (OSPPA) to conduct a top-to-bottom review of the commission’s systems and processes for information collection, processing, analysis, and dissemination. 
According to the Chairman, he initiated the review to uncover opportunities to improve the commission’s information capabilities. In particular, the Chairman sought information on whether any (1) new information should be collected to support the commission’s mission, (2) existing information reporting requirements could be streamlined or eliminated because they are unduly burdensome or no longer relevant, and (3) existing technological platforms and management processes could be modernized in order to make the commission’s use of information more efficient and effective. The Chairman asked OSPPA to answer 20 questions, including the following: For each bureau and office, what significant information is collected and which information is used most heavily internally or externally? Is there overlap among bureaus or offices with regard to information collection? What formal operational processes exist to manage the full information “life cycle” and are there any bottlenecks? Does FCC make regular efforts to gather best practices from other information collections agencies? What reports does FCC regularly generate to make information available to the public, what are the most important information systems, and what metrics does FCC have to track public consumption of information? According to FCC officials, OSPPA has taken several steps to carry out the Chairman’s request. In particular, OSPPA (1) sought information on the current information collection efforts and future information needs in FCC’s bureaus and offices and (2) identified potential gaps between the current collections and future needs. OSPPA officials said that the current effort will likely identify areas for greater investigation for the bureaus and offices, and that the current effort is the beginning of a multiyear review and transition process. Additionally, the Chairman initiated an assessment of FCC’s database and communications infrastructure. 
According to the Chairman, an initial review strongly suggested that a significant upgrade will be warranted to bring the commission into the 21st century. The Chairman also stated that an upgrade will permit the commission and its staff to function much more efficiently and facilitate public use of its Web site. FCC also launched an internal online forum where employees can submit ideas for improvement and reform, and FCC plans to launch a section on its Web site allowing the public to offer ideas for reform as well. We provided FCC with a draft of this report for its review and comment. FCC provided written comments, which appear in appendix II. In its written comments, FCC discussed the various efforts under way at the commission to improve its data management processes. FCC also provided technical comments that we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time we will send a copy of this report to the Chairman of the Federal Communications Commission. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines (1) the information the Federal Communications Commission (FCC) collects; (2) how FCC collects and manages information; (3) the strengths and weaknesses, if any, in FCC’s information collection and management practices; and (4) the status of FCC’s internal review of its information collection and management practices. 
To describe the information FCC collects, we obtained and reviewed FCC’s list of information collection instruments approved under the Paperwork Reduction Act (PRA); we reviewed collection instruments that were approved as of April 22, 2009. The list included FCC’s description of the information collection, the PRA number, the name of the bureau or office responsible for managing the collection, and the estimated annual burden hours associated with the collection. We also interviewed FCC officials from seven bureaus and offices, including the Chief Information Officer. We discussed the availability, formats, and special characteristics of FCC’s information collections. To describe how FCC collects and manages information, we reviewed commissionwide directives on FCC’s (1) records management program, (2) forms management program, (3) management of nonpublic information, and (4) information security program. We reviewed the National Institute of Standards and Technology’s guidance on security procedures for information, the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs directives on managing and securing information, and the National Archives and Records Administration’s (NARA) guidance for retaining and disposing of information. We also interviewed FCC officials. Additionally, to obtain information on how various FCC bureaus and offices collect and manage information, we developed two questionnaires that covered various aspects of the information life cycle—collection, management, dissemination, and retention and disposal. We developed one questionnaire for collection instruments where FCC retains the information and a second questionnaire for collection instruments where the filing entity or a third party maintains the information. We pretested the questions to determine appropriateness and made revisions based on the results of the pretest. 
To select the information collection instruments from which we would obtain information via the questionnaires, we initially asked FCC for the repository (e.g., the database where the information resides) associated with each of its information collections; FCC officials said the commission could not readily provide that information because it does not maintain its records in such a manner. In response, we adopted an alternative, multistep approach. We asked FCC to classify the 413 OMB-approved information collection instruments into categories based on activity or use (e.g., licenses and surveys); FCC divided its 413 collection instruments into 21 categories. We determined the average burden hours for each of the 21 categories, based on the estimated annual burden hours for the collection instruments in each category. We established three strata based on the average burden hours (greater than 46,803 hours, 46,803 hours to 17,904 hours, and less than 17,904 hours). We selected one category from each of the first two strata and two categories from the third stratum in order to obtain a mix of collection types and to eliminate collections that received extremely limited submissions. Finally, we judgmentally selected collection instruments from each of these four categories; this process resulted in the selection of 30 information collection instruments. Because of the nature of our selection process, our results cannot be used to evaluate FCC’s collection processes overall. Of the 30 collection instruments, FCC maintains the information for 21 collection instruments and the filing entity or a third party maintains the information for the remaining 9 collection instruments. We received responses for all 30 collections. After receiving the 30 responses, we reviewed and analyzed the answers and followed up on selected answers and documentation provided in the questionnaire by interviewing the responsible officials. 
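The multistep selection approach described above can be illustrated with a short script. This is a minimal sketch: the category names and burden-hour figures are hypothetical stand-ins, and only the stratum cutoffs (46,803 and 17,904 average annual burden hours) come from the text.

```python
# Illustrative sketch of the stratified selection approach described above.
# Category names and burden-hour values are hypothetical; only the stratum
# cutoffs (46,803 and 17,904 average annual burden hours) come from the report.

def stratify(categories, high=46_803, low=17_904):
    """Place each category into one of three strata by average burden hours."""
    strata = {"high": [], "middle": [], "low": []}
    for name, burden_hours in categories.items():
        avg = sum(burden_hours) / len(burden_hours)
        if avg > high:
            strata["high"].append((name, avg))
        elif avg >= low:
            strata["middle"].append((name, avg))
        else:
            strata["low"].append((name, avg))
    return strata

# Hypothetical categories, each mapping to the estimated annual burden
# hours of the collection instruments it contains.
categories = {
    "licenses": [60_000, 80_000],    # avg 70,000 -> high stratum
    "surveys": [20_000, 40_000],     # avg 30,000 -> middle stratum
    "complaints": [5_000, 10_000],   # avg 7,500  -> low stratum
    "accounting": [1_000, 2_000],    # avg 1,500  -> low stratum
}

strata = stratify(categories)
for stratum, members in strata.items():
    print(stratum, [name for name, _ in members])
```

A judgmental selection of instruments would then be drawn from one category in each of the first two strata and two categories in the third, as the text describes.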
To describe the strengths and weaknesses in FCC’s information collection and management practices, we compared the 30 responses from the questionnaires with the commission’s internal policies and procedures and federal guidance on information collection and management practices. We also interviewed 19 stakeholders, including representatives from communication companies, industry trade associations, consumer and public interest groups, state regulators, and academic and industry experts. We selected these stakeholders to include a cross section of industries regulated by FCC, including radio and television broadcasters, cable television operators, satellite operators, and wireline and wireless telephone companies, as well as parties representing consumers and regulators that are affected by the commission’s policies and rulemaking. We reviewed prior GAO reports and performed a literature review of best practices for the collection and management of information. To describe the steps FCC is taking to address information management weaknesses, we reviewed a memorandum dated July 22, 2009, from the FCC Chairman initiating a review of the commission’s information management collections and processes. We also reviewed a congressional hearing statement made by the Chairman in which he discussed FCC’s initiatives to improve information management. We met with the Chief of the Office of Strategic Planning and Policy Analysis to discuss progress on the commissionwide information management review the Chairman requested in July 2009. In addition to the contact listed above, Michael Clements (Assistant Director), Andy Clinton, Mya Dinh, Amy Rosewarne, Don Watson, Mindi Weisenbloom, and Elizabeth Wood made major contributions to this report.
The Federal Communications Commission (FCC) regulates industries that affect the lives of virtually all Americans. FCC-regulated industries provide Americans with daily access to communications services, including wireline and wireless telephone, radio, and television. To ensure FCC is carrying out its mission, the commission requires a significant amount of information, such as ownership and operating information from radio and television stations. In prior reports, GAO has found weaknesses with FCC's information collection, management, and reporting processes. While FCC has taken action, the commission has not implemented all the recommendations associated with information collection, management, and reporting. As requested, this report provides information on (1) the information FCC collects; (2) how FCC collects and manages information; (3) the strengths and weaknesses, if any, in FCC's information collection and management practices; and (4) the status of FCC's internal review of its information collection and management practices. To complete this work, GAO gathered information on FCC's information collection efforts, reviewed information collection and management practices for 30 collection instruments, interviewed agency officials and industry stakeholders, and reviewed relevant laws and guidance. FCC provided comments which discuss its efforts to improve data management. FCC gathers a wide variety of information through information collection instruments. FCC gathers information through 413 collection instruments approved by the Office of Management and Budget (OMB). Through these OMB-approved collection instruments, FCC gathers information pertaining to (1) required company filings, such as the ownership of television stations; (2) applications for FCC licenses; (3) consumer complaints; (4) company financial and accounting performance; and (5) a variety of other issues, such as an annual survey of cable operators. 
FCC estimates that it receives nearly 385 million responses with an estimated 57 million burden hours associated with the 413 collection instruments. FCC's bureaus and offices collect and manage most commission information following commissionwide programs, policies, and procedures. FCC articulates its commissionwide programs, policies, and procedures in several directives, including its records management program. These directives help ensure FCC's compliance with governmentwide laws and regulations. Since FCC's bureaus and offices are the primary users of information, implementing decisions generally occur at that level. According to GAO's review of 30 information collections, FCC's bureaus and offices collect and manage information in a variety of ways. For example, FCC collects and manages 14 of the 30 information collections electronically, while it collects and manages some information in paper format. FCC disseminates information from 11 of the 30 information collections on its Web site, while it disseminates some information upon request, but in a redacted format. According to GAO's review of 30 information collections, FCC's bureaus and offices appear to follow commission- and governmentwide guidance, such as quality control procedures and safeguards for sensitive information. However, prior GAO reports and some stakeholders identified certain weaknesses with FCC's information collection and management practices. These weaknesses concern FCC's information collection processes and the accuracy of the estimated burden hours associated with FCC's information collections. 
For example, GAO recently reported that FCC rarely includes the text of a proposed rule in its Notice of Proposed Rulemaking, and stakeholders similarly noted that FCC does not initially specify the information that it wants to gather in the notice; the lack of specificity makes it harder for stakeholders and the public to provide meaningful input on the proposed information collection instrument. Recognizing the need to improve the commission's information practices, in July 2009, FCC's Chairman initiated a review of the commission's systems and processes. The Chairman sought to address whether (1) new information should be collected, (2) existing information reporting requirements could be streamlined or eliminated, and (3) existing technology and management processes could be modernized in order to make the commission's use of information more efficient and effective. FCC staff have taken several steps to implement the review and the effort continues.
In the face of continuing reports of financial management weaknesses across the federal government, including wasteful spending, poor management, and losses totaling billions of dollars, the Chief Financial Officers (CFO) Act of 1990 was signed into law. The act focuses on establishing a leadership structure; improving systems of accounting, financial management, and internal control; and enabling effective management and oversight through the production of complete, reliable, timely, and consistent financial information. With the foundation of the CFO Act and the Government Management Reform Act of 1994 (GMRA), with its goal “to provide a more effective, efficient and responsive government,” along with other federal agency management reform legislation, such as the Government Performance and Results Act of 1993 (GPRA) and FFMIA, a framework was put in place to improve stewardship, accountability, and transparency in the executive branch. Major goals of the reform legislation have included the following: Strengthening internal control. Accountability is part of the organizational culture that goes well beyond receiving an unmodified or “clean” audit opinion on agency financial statements; the underlying premise is that agencies must become more results oriented and focused on internal control. Thousands of internal control problems have been identified and corrected in executive branch agencies over the past two decades. A disciplined and structured approach to assessing and dealing with internal controls over the critical flow of funds through the entire agency provides a mechanism that over time mitigates potential damaging breakdowns in financial integrity and mismanagement of funds. Such breakdowns can affect the ability of the agency or entity to carry out its mission and can severely damage public confidence. Accurate accounting and financial reporting. 
The CFO Act and FFMIA provide for financial management systems that support reliable financial reporting on the results of operations on a day-to-day basis. This functionality, in turn, supports management decision making on budgets, programs, and overall mission performance and goals. Accurate accounting and financial reporting are also a major element of any effort to achieve auditable financial statements. Improving performance information. A key goal of much of the federal management reform legislation enacted over the past 25 years, such as the CFO Act and GPRA, is the ability to have reliable information to measure performance against mission goals. Federal agencies have made progress in the preparation of annual performance and accountability reports (PAR). By linking financial and performance information, the PARs provide important information about the return on the taxpayers’ investment in agency programs and operations. Enhancing transparency. Achieving clean audit opinions evidencing sound financial management practices is an overall outcome of effective implementation of these reforms. For example, the achievement of a clean audit opinion on the first-ever annual financial statements for the Troubled Asset Relief Program (TARP) was a significant accomplishment. This provided important accountability and transparency to the public regarding TARP activities. Many of the problems that preceded passage of the CFO Act also led us to issue our first high-risk list in 1990, designating certain DOD and other federal programs as high risk because of their vulnerability to fraud, waste, abuse, and mismanagement. DOD areas designated as high risk in 1990 included Supply Chain Management and Weapon System Acquisition, followed by Contract Management in 1992, Financial Management and Business Systems Modernization in 1995, Support Infrastructure Management in 1997, and Business Transformation in 2005. 
As we reported in our latest high-risk update, DOD is one of the few federal entities that cannot accurately account for its spending or assets and it is the only federal agency that has yet to receive an opinion on at least one of its department-wide financial statements. Without accurate, timely, and useful financial information, DOD is severely hampered in making sound decisions affecting its operations. Further, to the extent that current budget constraints and fiscal pressures continue, the reliability of DOD’s financial information and ability to maintain effective accountability for its resources will be increasingly important to the federal government’s ability to make sound resource allocation decisions. Effective financial management is also fundamental to achieving DOD’s broader business transformation goals in the areas of asset management, acquisition and contract management, and business systems modernization. As we have previously reported, long-standing weaknesses in DOD’s financial management adversely affect the economy, efficiency, and effectiveness of the department’s operations. These financial management weaknesses and related business management and system deficiencies continue to adversely affect its ability to control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent and detect fraud, waste, and abuse; and address pressing management issues. As we have previously recommended, the successful transformation of DOD’s financial management processes and operations is necessary for DOD to routinely generate timely, complete, and reliable financial and other information for day-to-day decision making, including the information needed to effectively (1) manage its assets, (2) assess program performance and make budget decisions, (3) make cost-effective operational choices, and (4) provide accountability over the use of public funds. 
Since 1990, we have identified DOD supply chain management as a high-risk area in part because of ineffective and inefficient inventory management practices and procedures, weaknesses in accurately forecasting demand for spare parts, and challenges in achieving widespread implementation of key technologies aimed at improving asset visibility. These factors have contributed to the accumulation of billions of dollars in spare parts that are excess to current needs, wasting valuable resources. DOD has made moderate progress in addressing its supply chain management weaknesses, but several long-standing problems have not yet been resolved. To provide high-level strategic direction, DOD issued its Logistics Strategic Plan in July 2010, which, among other things, established a goal to improve supply chain processes, including inventory management practices and asset visibility. With respect to inventory management, in November 2010, as required by the Congress, DOD issued its Comprehensive Inventory Management Improvement Plan, which is aimed at reducing excess inventory by improving inventory management practices. We reported in 2012 and 2013 that DOD had made progress in reducing its excess inventory and implementing its Comprehensive Inventory Management Improvement Plan. DOD established overarching goals in the plan to reduce the enterprise-wide percentages of on-order excess inventory (items already purchased that may be excess due to subsequent changes in requirements) and on-hand excess inventory (items categorized for potential reuse or disposal). Since DOD was exceeding its initial goals for reducing excess inventory, we recommended that DOD establish more challenging, but achievable, goals for reducing excess inventory and that the department periodically reexamine and update its goals. 
DOD agreed with our recommendations and revised its on-hand excess inventory goal from 10 percent of the total value of inventory to 8 percent in fiscal year 2016. However, DOD did not make any changes to its on-order excess inventory goals and maintained that its current goals of 6 percent of the total value of on-order inventory in 2014 and 4 percent in 2016 were sufficient. Our work determined that DOD has made progress in reducing on-hand and on-order excess inventory. For example: Data from the end of fiscal year 2009 showed that of the about $94.5 billion in on-hand inventory, 9.4 percent, or about $8.8 billion, was excess. DOD’s most recent fiscal year-end data, from September 2013, showed that of the about $98.9 billion in on-hand inventory, 7.3 percent was considered excess. Data from the end of fiscal years 2009 through 2013 showed that the department had reduced its on-order inventory from $13.6 billion to about $10.2 billion and its percentage of on-order excess inventory from 9.5 to 7.9 percent, with $812 million considered excess. With respect to asset visibility, we found that DOD needs to take additional actions to improve asset visibility, to include completing and implementing its strategy for coordinating improvement efforts across the department for asset tracking and in-transit visibility. In February 2013, we reported that DOD had taken steps to improve in-transit visibility of its assets through efforts developed by several of the defense components, but no one DOD organization was fully aware of all such efforts across the department, because they are not centrally tracked. DOD had begun developing a strategy for asset visibility and in-transit visibility; however, as of February 2013 this strategy did not include all key elements of a comprehensive strategic plan. 
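As a quick arithmetic check, the excess-inventory percentages and dollar figures reported above are mutually consistent to within rounding; a minimal computation (amounts in billions of dollars, taken from the fiscal year-end data above):

```python
# Cross-check of the excess-inventory figures cited above. Dollar amounts
# are in billions and come from the report's fiscal year-end data; small
# differences reflect rounding in the reported numbers.

def implied_excess(total_billions, pct):
    """Dollar value of excess inventory implied by a total and a percentage."""
    return total_billions * pct / 100

# End of fiscal year 2009, on-hand: $94.5B total, 9.4 percent excess.
fy09_on_hand = implied_excess(94.5, 9.4)   # ~8.88; report says about $8.8 billion

# End of fiscal year 2013, on-order: about $10.2B total, 7.9 percent excess.
fy13_on_order = implied_excess(10.2, 7.9)  # ~0.81; report says $812 million

print(f"FY2009 implied on-hand excess:  ${fy09_on_hand:.1f} billion")
print(f"FY2013 implied on-order excess: ${fy13_on_order * 1000:.0f} million")
```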
We recommended that the department finalize its strategy and in doing so ensure that complete, accurate, and consistent information for all in-transit visibility efforts is captured, tracked, and shared, and that the strategy contains all of the key elements of a comprehensive strategic plan, including resources and investments and key external factors. DOD agreed with our recommendation and revised and finalized its asset visibility strategy. We are currently reviewing the new strategy and the department’s efforts to improve asset visibility. In May 2012, we also reported that improvements were needed to enhance DOD’s management approach to, and implementation of, item unique identification (IUID) technology (GAO, Defense Logistics: Improvements Needed to Enhance DOD’s Management Approach and Implementation of Item Unique Identification Technology, GAO-12-482 (Washington, D.C.: May 3, 2012)). Effective asset management controls are essential for asset accountability and safeguarding and financial reporting on asset values. DOD primarily relies on various logistical systems to carry out both its stewardship and financial reporting responsibilities for an estimated $1.5 trillion in physical assets, ranging from enormous inventories of ammunition, stockpile materials, and other military items to multimillion-dollar weapon systems. These systems are the primary source of information for maintaining visibility over assets to meet military objectives and readiness goals and for financial reporting. However, our prior reports and DOD Inspector General (IG) reports have shown that these systems have serious weaknesses that, in addition to hampering financial reporting, impair DOD’s ability to (1) maintain central visibility over its assets; (2) safeguard assets from physical deterioration, theft, or loss; and (3) prevent the purchase of assets already on hand. Collectively, these weaknesses can seriously diminish the efficiency and economy of the military services’ support operations. 
For example, we have continued to monitor the implementation of the Army’s Logistics Modernization Program (LMP) system, which supports both inventory management and financial reporting. In November 2013, we reported that the Army’s LMP, which replaced two aging Army systems, is supporting the Army’s industrial operations. However, the current system—LMP Increment 1—does not support certain critical requirements, such as automatically tracking repair and manufacturing operations on the shop floor of depots and arsenals. In addition, according to Army officials, the current system will not enable the Army to generate auditable financial statements by 2018, the statutory deadline for this goal. The Army is in the process of developing LMP Increment 2 to, among other things, address some of the identified weaknesses and expects to complete fielding by September 2016. To determine whether the Army is achieving its estimated financial benefits in LMP, we recommended that the Army develop and implement a process to track the extent of financial benefits realized from the use of LMP during the remaining course of its life cycle. The Army agreed with our recommendation and stated that it would develop a process to track the extent of financial benefits recognized within LMP. We are continuing to monitor the Army’s actions. Reliable performance and budget information are essential to ensure that DOD has effectively budgeted for its needs so that operations can proceed smoothly to meet mission readiness demands. Accurate and timely performance and budget information also is critical to effective oversight and decision making on DOD’s numerous reform initiatives. The following examples illustrate some of the serious weaknesses we have identified in our past work on DOD’s performance management and budget information. In our February 2014 report on the audit of the U.S. 
government’s consolidated financial statements, we discussed, as a material weakness, DOD’s inability to estimate with assurance key components of its environmental and disposal liabilities. Deficiencies in internal control supporting the process for estimating environmental and disposal liabilities could result in improperly stated liabilities as well as adversely affect the ability to determine priorities for cleanup and disposal activities and to appropriately consider future budgetary resources needed to carry out these activities. In addition, DOD could not support a significant amount of its estimated military postretirement health benefits liabilities for federal employee and veteran benefits. These unsupported amounts related to the cost of direct health care provided by DOD-managed military treatment facilities. Problems in accounting for liabilities affect the determination of the full cost of the federal government’s operations and the extent of its liabilities. DOD is addressing these issues through its implementation of its FIAR Plan. In June 2013, we reported that problems with the accuracy of outstanding work orders at fiscal year-end for the Army’s Industrial Operations activities resulted in inaccurate budget estimates. To the extent that Industrial Operations does not complete work at year-end, the work and related funding are carried over into the next fiscal year. Carryover is the reported dollar value of work that has been ordered and funded by customers but not completed by Industrial Operations at the end of the fiscal year. We found that the Army did not adequately evaluate program needs and performance management constraints or the budgetary impact of the implementation of its LMP when budgeting for its Industrial Operations. As a result, unreliable information on the scope of work and the lack of available parts affected mission readiness. 
Further, the overstated Industrial Operations carryover amounts resulted in unreliable estimates of Operations and Maintenance funding levels. For example, the Industrial Fund carryover amounts more than doubled from fiscal years 2006 through 2012, exceeding budget estimates by more than $1.1 billion each year. We made three recommendations aimed at implementing the Army’s planned corrective actions to (1) establish a timetable for implementing new policy guidance, (2) improve the budgeting for new orders, and (3) establish procedures for evaluating work orders received to ensure that resources are available to perform the work. DOD agreed with our recommendations and has actions planned or under way to address them. The Office of Management and Budget (OMB) requires that federal agency budget submissions reflect anticipated reductions in improper payments in their performance and accountability reports (PAR) or agency financial reports (AFR) pursuant to legal requirements for the estimation of improper payments. For years, DOD has reported over $1 billion annually in improper payments. Improper payments degrade the integrity of government programs, compromise citizens’ trust in government, and drain resources away from the missions and goals of the government. As we reported in May 2013, although DOD has reported billions of dollars in improper payments, it does not know the extent of its improper payments because of flaws in its estimating methodology. We found that DOD’s improper payment estimates reported in its fiscal year 2011 AFR were neither reliable nor statistically valid because of long-standing and pervasive financial management weaknesses and significant deficiencies in the department’s procedures to estimate improper payments. The flawed methodology for estimating improper payments also limits the effectiveness of DOD’s corrective actions.
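The kind of statistically valid estimation at issue can be illustrated with a simplified sketch of sampling-based error projection. This is not DOD’s or OMB’s prescribed methodology; a valid methodology would also require confidence intervals and stratification, which are omitted here, and all names and figures are hypothetical:

```python
import random

def estimate_improper_payments(payments, sample_size, is_improper, seed=0):
    """Project a total improper-payment amount from a simple random sample.

    `payments` is a list of payment amounts; `is_improper` is a predicate that
    flags a sampled payment as improper (in practice this is a manual review).
    Returns a dollar-weighted point estimate for the full population.
    """
    rng = random.Random(seed)
    sample = rng.sample(payments, sample_size)
    improper = [p for p in sample if is_improper(p)]
    error_rate = sum(improper) / sum(sample)  # dollar-weighted error rate
    return error_rate * sum(payments)         # projected improper amount
```

A usage example: sampling all of `[10, 20, 30, 40]` with a predicate flagging the two largest payments yields a projected improper amount of 70, since the sampled error rate is applied to the full population total.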
We recommended that DOD take steps to (1) improve improper payment estimating procedures, such as developing valid sampling methodologies and error projections; (2) identify programs susceptible to improper payments and perform risk assessments; (3) develop and implement corrective action plans in accordance with best practices; (4) implement recovery audits; and (5) ensure the accuracy and completeness of improper payment and recovery audit reporting. DOD agreed with our recommendations and cited planned actions to address them. Reliable information on the cost of operations is critical to provide accountability for and to efficiently and economically manage DOD’s vast resources. Reliable cost information is essential for making important decisions, such as reallocating resources to fighting forces and considering whether to continue, modify, or discontinue programs and activities. However, DOD’s legacy financial management systems were not designed to capture the full cost of its activities and programs, and DOD’s enterprise resource planning (ERP) systems continue to experience schedule slippages and cost overruns and are not estimated to be fully implemented until the end of fiscal year 2016 or later. As our prior work has found, to effectively, efficiently, and economically manage DOD’s programs, its managers need reliable cost information for (1) evaluating programs (for example, measuring actual results of management’s actions against expected savings or determining the effect of long-term liabilities created by current programs); (2) making cost-effective choices, such as whether to outsource specific activities and how to improve efficiency through technology choices; and (3) controlling costs for its weapon systems and business activities funded through working capital funds. The lack of reliable, cost-based information has hampered DOD in each of these areas, as described in the following examples. In a February 2014 report on our audit of the U.S.
government’s consolidated financial statements, we reported that DOD was responsible for the majority of the federal government’s inventories and property, plant, and equipment and that DOD did not maintain adequate systems or have sufficient records to provide reliable information on these assets. Further, deficiencies in internal control over such assets could affect the federal government’s ability to fully know the assets it owns, including their location and condition, and its ability to (1) safeguard assets from physical deterioration, theft, or loss; (2) account for acquisitions and disposals of such assets and reliably report asset balances; (3) ensure that the assets are available for use when needed; (4) prevent unnecessary storage and maintenance costs or purchase of assets already on hand; and (5) determine the full costs of programs that use these assets. DOD is addressing these issues through implementation of its FIAR Plan. With the nation facing fiscal challenges and the potential for tighter defense budgets, the Congress and DOD have placed more attention on controlling the billions of dollars spent annually on weapon system operating and support costs, including costs for repair parts, maintenance, and personnel, which account for 70 percent of the total costs of a weapon system over its life cycle. The Selected Acquisition Report (SAR) is DOD’s key recurring status report on the cost, schedule, and performance of major defense acquisition programs and is intended to provide authoritative information for congressional oversight of these programs. Oversight of operating and support costs is important because many of the key decisions affecting these life cycle costs are made during the acquisition process. In February 2012, we reported that DOD’s reports to the Congress on estimated weapon system operating and support costs are often inconsistent and sometimes unreliable, limiting visibility needed for effective oversight of these costs. 
To enhance the visibility of weapon system costs during acquisition, we recommended that DOD improve its guidance to program offices on cost reporting and improve its process for reviewing these costs prior to final submission of the SAR to the Congress. DOD concurred with our recommendations and noted actions it was taking to address them. We are continuing to monitor DOD’s progress in addressing our recommendations. In December 2012, DOD canceled the Air Force’s Expeditionary Combat Support System after having spent more than a billion dollars and missing multiple milestones, including failure to achieve deployment within 5 years of obligating funds. The system was to provide the Air Force with a single, integrated logistics system that was to control and account for about $36 billion of inventory. We issued several reports on this system and found that, among other things, the program was not fully following best practices for developing reliable schedules and cost estimates. We also reported that independent Air Force technical evaluations identified operational deficiencies that impaired the system’s efficiency and effectiveness in accounting for business transactions and reporting reliable financial information. Accurate and complete cost information also is key to making effective and economical investment decisions. We reported that one-time implementation costs for Base Realignment and Closure (BRAC) 2005 grew from $21 billion originally estimated by the BRAC Commission in 2005 to about $35.1 billion, or by 67 percent, through fiscal year 2011, primarily because of higher-than-anticipated military construction costs. Military construction costs for the BRAC 2005 round increased from $13.2 billion based on original estimates by the BRAC Commission to $24.5 billion, an 86 percent increase, through fiscal year 2011, while over the same period, general inflation increased by 13.7 percent.
In certain cases, DOD did not include some significant military construction requirements that were needed to implement the recommendations as envisioned, resulting in the identification of additional requirements and related cost increases after the recommendations were approved by the BRAC Commission. Consequently, the increase of $11.3 billion in military construction costs drove about 80 percent of the total cost increases of $14.1 billion for BRAC 2005. Further, because some additional requirements were driven by events after the BRAC Commission’s approval, the Congress had limited visibility into the potential costs of the original recommendations. Another reason we identified for the growth in implementation costs over DOD’s initial BRAC estimates was that DOD had difficulties accurately anticipating information technology requirements for many recommendations, leading to significantly understated information technology costs for some BRAC recommendations—particularly those that involved missions with considerable reliance on such capabilities. We made 10 recommendations for improving the BRAC process. DOD concurred with 3 of our recommendations, partially concurred with 2, and did not concur with 5 of them. In disagreeing with certain recommendations, DOD expressed concern that our recommendations precluded optimizing military value and stated that the current process was sufficient to address our concerns. We continue to believe that although DOD’s BRAC process was fundamentally sound, our recommendations did not preclude opportunities for improvements or the potential for cost savings. 
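The BRAC cost-growth percentages cited above follow directly from the reported dollar amounts and can be verified arithmetically; the short sketch below simply recomputes them from the figures in the report:

```python
def pct_growth(original, actual):
    """Percentage growth from an original estimate to an actual amount."""
    return (actual - original) / original * 100

# Figures reported above, in billions of dollars.
total_growth = pct_growth(21.0, 35.1)            # about 67 percent
milcon_growth = pct_growth(13.2, 24.5)           # about 86 percent
milcon_increase = 24.5 - 13.2                    # $11.3 billion
share_of_total = milcon_increase / 14.1 * 100    # about 80 percent of the $14.1 billion total increase
```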
We recently reported that our analysis of 333 reports related to DOD funds control issued in fiscal years 2007 through 2013 identified over 1,000 funds control weaknesses related to (1) training, supervision, and management oversight; (2) proper authorization, recording, documentation, and reporting of transactions; and (3) business system compliance with federal laws and accounting standards. We found that these weaknesses led DOD to make program and operational decisions based on unreliable data and impaired DOD’s ability to improve its financial management. Specifically, fundamental weaknesses in funds control significantly impaired DOD’s ability to (1) properly use resources, (2) produce reliable financial reports on the results of operations, and (3) meet its audit readiness goals as discussed in the following examples. Continuing reports of violations of the Antideficiency Act (ADA) and other fiscal laws, such as the Purpose Statute, underscore DOD’s inability to assure that obligations and expenditures are properly recorded and do not exceed statutory levels of control. The ADA requires, among other things, that no officer or employee of DOD incur obligations or make expenditures in excess of the amounts made available by appropriation, by apportionment, or by further subdivision according to the agency’s funds control regulations. According to copies of ADA violation reports we reviewed, DOD reported 75 ADA violations from fiscal year 2007 through fiscal year 2012, totaling nearly $1.1 billion. We received reports of 2 additional ADA violations in 2013 totaling $148.6 million. However, we determined that the number of violations and dollar amounts reported may not be complete because of weaknesses in DOD’s funds control and monitoring processes that may not have allowed all violations to be identified or reported. 
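At its core, the ADA’s funds-control requirement reduces to a comparison of proposed obligations against the amounts made available. The sketch below illustrates that check only; the function name and figures are hypothetical and do not reflect DOD’s actual funds control systems:

```python
def within_funds_control(amount_available, obligations_to_date, new_obligation):
    """Return True if a new obligation stays within the amount made available
    by appropriation, apportionment, or further subdivision of funds.
    A False result signals a potential Antideficiency Act violation."""
    return obligations_to_date + new_obligation <= amount_available

# Hypothetical figures, for illustration only.
ok = within_funds_control(1_000_000, 900_000, 50_000)        # True: within available funds
exceeds = not within_funds_control(1_000_000, 900_000, 150_000)  # True: would exceed available funds
```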
For example, DOD IG reports issued in fiscal years 2007 through 2012 identified $5.5 billion in potential ADA violations that required further investigation to determine whether an ADA violation had, in fact, occurred, or if adjustments could be made to avoid a violation. Further, while DOD’s Financial Management Regulation (FMR) limits the time from identification to reporting of ADA violations to 15 months, our analysis identified several investigations of potential ADA violations that took from several additional months to several years before actual violations were determined and reported. For example, as of September 30, 2013, three of the DOD IG-reported potential violations totaling $713.1 million could not be fully corrected and, consequently, resulted in $108.8 million in actual, reported ADA violations. To the extent that ADA violations are not identified, corrected, and reported, DOD management decisions are being made based on incomplete and unreliable data. DOD has stated that its major financial decisions are based on budgetary data (e.g., the status of funds received, obligated, and expended). We have found that the department’s ability to improve its budgetary accounting has historically been hindered by its reliance on fundamentally flawed financial management systems and processes and transaction control weaknesses. In its November 2013 AFR, DOD self-reported 16 material weaknesses in financial reporting, noting that it has no assurance of the effectiveness of the related controls. These weaknesses affect reporting on budgetary transactions and balances, including budget authority, fund balance, outlays, and categories of transactions, such as civilian pay, military pay, and contract payments. As a result, we have concluded that DOD’s reports on budget execution and reports on the results of operations that could have a material effect on budget, spending, and other management decisions are unreliable.
For example, we found that DOD continues to make billions of dollars of unsupported, forced adjustments, or “plugs,” to reconcile its Fund Balance with Treasury (FBWT). In the federal government, an agency’s FBWT accounts are similar in concept to corporate bank accounts. The difference is that instead of a cash balance, FBWT represents unexpended budget authority in appropriation accounts. Similar to bank accounts, the funds in DOD’s appropriation accounts must be reduced or increased as the department spends money or receives collections that it is authorized to retain for its own use. For fiscal year 2012, DOD agencies reported making $9.2 billion in unsupported reconciling adjustments to agree their fund balances with the Department of the Treasury’s (Treasury) records. DOD’s unsupported reconciling adjustments to agree its fund balances to Treasury records grew to $9.6 billion in fiscal year 2013. We recommended that the Navy develop and implement standard operating procedures for performing FBWT reconciliations with Treasury records and that it provide training on the new procedures to personnel performing FBWT reconciliations. The Navy has actions under way to address our recommendations. Further, we have reported that over the years, DOD has recorded billions of dollars of disbursement and collection transactions in suspense accounts because the proper appropriation accounts could not be identified and charged, generally because of coding errors. Accordingly, Treasury does not accept DOD reporting of suspense transactions, and suspense transactions are not included in DOD component FBWT reconciliations. We have concluded that it is important that DOD accurately and promptly charge transactions to appropriation accounts since these accounts provide the department with legal authority to incur and pay obligations for goods and services. 
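The FBWT reconciliation described above can be sketched as a comparison of the agency’s ledger balance to Treasury’s records, with any difference not explained by documented items becoming an unsupported “plug.” All names and balances below are hypothetical, for illustration only:

```python
def fbwt_reconciliation(agency_balance, treasury_balance, supported_adjustments):
    """Reconcile an agency's Fund Balance with Treasury to Treasury's records.

    Returns the raw difference and the portion not explained by supported
    (documented) adjustments -- the kind of unsupported reconciling
    adjustment, or 'plug', described above.
    """
    difference = treasury_balance - agency_balance
    unsupported_plug = difference - sum(supported_adjustments)
    return difference, unsupported_plug

# Hypothetical balances in millions of dollars, for illustration only.
diff, plug = fbwt_reconciliation(
    agency_balance=1_250.0,            # per the agency's general ledger
    treasury_balance=1_262.0,          # per Treasury's records
    supported_adjustments=[5.0, 3.0],  # documented in-transit items
)
# diff = 12.0; a 4.0 unsupported plug would be needed to force agreement
```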
We recommended that the Navy perform periodic testing of systems for reporting transactions to Treasury and prioritize and address identified deficiencies. The Navy agreed with our recommendations and has actions under way to address them. We are monitoring the Navy’s progress. While DOD has actions under way to address its department-wide funds control weaknesses, several are not expected to be completed until 2017. Until fully resolved, these weaknesses will continue to adversely affect DOD’s ability to achieve its goals for financial accountability, including the ability to produce consistent, reliable, and sustainable financial information for day-to-day decision making. Sustained leadership commitment will be critical to achieving success. In commenting on our most recent report released this week, DOD stated that while our report recommended no new actions based on the numerous actions that DOD already has under way, the department’s commitment to building a stronger business environment via its people, processes, and systems remains paramount. In 2005, the Under Secretary of Defense Comptroller/CFO (DOD Comptroller) established the FIAR Directorate, consisting of the FIAR Director and his staff, to develop, manage, and implement a strategic approach for addressing financial management deficiencies, achieving audit readiness, and integrating those efforts with other initiatives. In accordance with the NDAA for Fiscal Year 2010, DOD provides reports to relevant congressional committees on the status of DOD’s implementation of the FIAR Plan twice a year—no later than May 15 and November 15. In August 2009, the DOD Comptroller sought to focus FIAR efforts by giving priority to improving processes and controls that support the financial information most often used to manage the department. Accordingly, the DOD Comptroller revised the FIAR Plan strategy to focus on two priorities—budgetary information and asset accountability. 
The first priority was to strengthen processes, controls, and systems that produce DOD’s budgetary information. The second priority was to improve the accuracy and reliability of management information pertaining to the department’s mission-critical assets, including military equipment, real property, and general equipment. In May 2010, the DOD Comptroller first issued the FIAR Guidance, which provided the standard methodology for the components to implement the FIAR Plan. According to DOD, the components’ successful implementation of this methodology is essential to the department’s ability to achieve full financial statement auditability. In October 2011, the Secretary of Defense directed the department to achieve audit readiness for its SBR for general fund accounts by the end of fiscal year 2014, and the NDAA for Fiscal Year 2012 required that the next FIAR Plan update include a plan to support this goal. Further, the NDAA for Fiscal Year 2013 made the 2014 target for SBR auditability an ongoing component of the FIAR Plan by amending the NDAA for Fiscal Year 2010 such that it now explicitly refers to describing the actions and costs associated with validating as audit ready both DOD’s SBR by the end of fiscal year 2014 and DOD’s complete set of financial statements by the end of fiscal year 2017. In response to component difficulties in preparing for a full SBR audit, the November 2012 FIAR Plan Status Report and the March 2013 FIAR Guidance included a revision to narrow the scope of initial audits to only current-year budget activity and expenditures on a Schedule of Budgetary Activity. Under this approach, beginning in fiscal year 2015, reporting entities are to undergo an examination of their Schedules of Budgetary Activity reflecting the amount of SBR balances and associated activity related only to funding approved on or after October 1, 2014. 
As a result, the Schedules of Budgetary Activity will exclude unobligated and unexpended amounts carried over from prior years’ funding as well as information on the status and use of such funding in subsequent years (e.g., obligations incurred and outlays). These amounts will remain unaudited. Over the ensuing years, as the unaudited portion of SBR balances and activity related to this funding declines, the audited portion is expected to increase. However, the NDAA for Fiscal Year 2010, as amended by the NDAA for Fiscal Year 2013, requires that the FIAR Plan describe specific actions to be taken and the costs associated with ensuring that DOD’s SBR is validated as ready for audit by not later than September 30, 2014. We have reported that because the audit of the Schedule of Budgetary Activity is an incremental step building toward an audit-ready SBR, the FIAR Plan does not presently comply with this requirement. Furthermore, all material amounts reported on the SBR will need to be auditable in order to achieve the mandated goal of full financial statement audit readiness by September 30, 2017. It is not clear how this can be accomplished if activity related to funding provided prior to October 1, 2014, remains unaudited. (DOD defines an assessable unit as any part of the financial statements, such as a line item, a class of assets, a class of transactions, or a process or a system, that helps produce the financial statements.) We found that while DOD has made progress toward financial audit readiness, according to DOD’s November 2013 FIAR Plan Status Report, milestone dates for the Navy have slipped and SBR milestone dates for the Army and defense agencies have been compressed, making it questionable whether corrective actions for these DOD components will be completed by September 2014 for all assessable units.
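The scope rule for the Schedule of Budgetary Activity is essentially a date filter on when funding was approved. A minimal sketch, with hypothetical transactions and amounts:

```python
from datetime import date

SBA_CUTOFF = date(2014, 10, 1)  # funding approved on or after this date is in scope

def in_audited_scope(funding_approved):
    """True if a transaction's funding falls within the audited Schedule of
    Budgetary Activity; funding approved earlier remains unaudited."""
    return funding_approved >= SBA_CUTOFF

# Hypothetical transactions (amounts in millions), for illustration only.
transactions = [
    {"amount": 500, "funded": date(2013, 6, 15)},  # prior-year funding: unaudited
    {"amount": 800, "funded": date(2014, 11, 3)},  # new funding: audited
]
audited = sum(t["amount"] for t in transactions if in_audited_scope(t["funded"]))
unaudited = sum(t["amount"] for t in transactions if not in_audited_scope(t["funded"]))
# audited = 800, unaudited = 500
```

As the sketch suggests, the unaudited portion shrinks only as pre-cutoff funding is expended over subsequent years.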
Further, the Air Force has revised its milestone dates for achieving SBR audit readiness to the third quarter of fiscal year 2015. With a reported $187.8 billion in fiscal year 2013 General Fund budgetary resources, the Air Force is material to DOD’s SBR, and if the Air Force cannot meet DOD’s September 2014 SBR audit readiness goal, DOD will not be able to meet its goal. This in turn raises substantial concerns about DOD’s ability to undergo an audit on a full set of financial statements for fiscal year 2018. In addition, our recent reports have identified several major challenges to DOD’s ability to successfully implement the FIAR Plan and meet its audit readiness goals. The following discussion summarizes these challenges. Process for identifying and mitigating risks to the FIAR effort. In August 2013, we reported that DOD’s FIAR effort would benefit from a risk management strategy to help program managers and stakeholders make decisions about assessing risk, allocating resources, and taking actions under conditions of uncertainty. In January 2012, DOD identified six department-wide risks to the FIAR Plan’s implementation: (1) lack of DOD-wide commitment, (2) insufficient accountability, (3) poorly defined scope and requirements, (4) unqualified or inexperienced personnel, (5) insufficient funding, and (6) information systems control weaknesses. DOD officials stated that risks are discussed on an ongoing basis during various FIAR oversight committee meetings; however, the risks DOD initially identified were not comprehensive, and DOD provided no evidence of efforts to identify additional risks. Further, we found little evidence that DOD analyzed risks it identified to assess their magnitude or that DOD developed adequate plans for mitigating the risks.
DOD’s risk mitigation plans, published in its FIAR Plan Status Reports, consisted of brief, high-level summaries that did not include critical management information, such as specific and detailed plans for implementation, assignment of responsibility, milestones, or resource needs. In addition, information about DOD’s mitigation efforts was not sufficient for DOD to monitor the extent of progress in mitigating identified risks. We concluded that without effective risk management at the department-wide level to help ensure the success of the FIAR Plan implementation, DOD is at increased risk of not achieving its audit readiness goals. We recommended that the department design and implement department-level policies and detailed procedures for FIAR Plan risk management that incorporate the five guiding principles for effective risk management. DOD acknowledged that it does not have a risk management program that is specifically related to its FIAR effort and cited planned actions that, if effectively and efficiently implemented, would address some aspects of the five guiding principles of risk management that are the basis for our recommendations. We are continuing to monitor DOD’s actions on our recommendation. Component implementation of the FIAR Guidance. The FIAR Guidance provides a methodology for DOD components to use in developing and implementing their Financial Improvement Plans (FIP). The guidance details the roles and responsibilities of the DOD components and prescribes a standard, systematic process for assessing processes, controls, and systems. DOD’s ability to achieve department-wide audit readiness greatly depends on its military components’ ability to effectively develop and implement FIPs in compliance with the FIAR Guidance. However, we have reported on concerns with the department’s efforts to implement this methodology.
For example, our review of the Navy’s civilian pay and Air Force’s military equipment audit readiness efforts identified significant deficiencies in the components’ execution of the FIAR Guidance, resulting in insufficient testing and unsupported conclusions. We recommended that DOD take various actions to improve the development, implementation, documentation, and oversight of DOD’s financial management improvement efforts. DOD generally concurred with our recommendations and noted actions being taken to implement them. We are continuing to monitor Navy and Air Force audit readiness actions. In reviews of other DOD components, we also found internal control weaknesses in DOD’s procedures for maintaining accountability for billions of dollars in funds and other resources. For example, the Army and DFAS could not readily identify the full population of payroll accounts associated with the Army’s $46 billion active duty military payroll because of deficiencies in existing procedures and nonintegrated personnel and payroll systems. We recommended that the Army identify documents needed to support military payroll transactions affecting the pay of millions of active duty Army military personnel and that it develop and implement procedures for maintaining those documents. As a first step, the Army has developed a matrix of supporting documents for its military pay. However, the Army has not yet completed action to populate a central repository with these records. Preliminary results from our ongoing work to assess the Army’s progress in implementing its FIP for budget execution to help guide its SBR readiness efforts indicate that the Army did not fully complete certain tasks in accordance with the FIAR Guidance to ensure that its FIP adequately considered the scope of efforts required for audit readiness.
For example, the Army did not consider the risks associated with excluding prior year balances and current year activity associated with legacy systems and did not adequately identify significant SBR activity attributable to service-provider business processes and systems or obtain sufficient information to assess their audit readiness. These activities may continue to represent material portions of future SBRs, which, if not auditable, will likely affect the Army’s ability to achieve audit readiness goals as planned. Our review of the Army’s monthly tests to assess the effectiveness of selected budget execution controls shows that the Army identified extensive deficiencies, such as a lack of appropriate reviews or approvals, and had an average failure rate of 56 percent for control tests from June 2012 through May 2013, the period covered by our review. Further, the Army’s corrective actions were not linked to specific corrective action plans to address the causes of identified deficiencies. The deficiencies and gaps we have identified in our preliminary findings throughout various phases of the Army’s SBR audit readiness efforts demonstrate a focus on meeting scheduled milestone dates and asserting audit readiness instead of completing actions to resolve extensive control deficiencies. Further, the military services rely heavily on DOD’s internal service providers to perform a variety of accounting, personnel, logistics, and system operations. For example, DFAS performs accounting and disbursement functions for the military services and defense agencies. The FIAR Guidance requires the service providers to have their control activities and supporting documentation examined by the DOD IG or an independent auditor in accordance with Statement on Standards for Attestation Engagements (SSAE) No. 16 so that components have a basis for relying on the service provider’s data for their financial statement audits.
In August 2013, we reported that DOD did not have an effective process for identifying audit-readiness risks, including risks associated with its reliance on service providers for much of its components’ financial data, and it needed better department-wide documentation retention policies. We identified two DOD component agencies—the Navy and the Defense Logistics Agency (DLA)—that had established practices consistent with risk management guiding principles. Because effective service-provider controls are critical to ensuring improvements in DOD funds control, we recommended that DOD consider and incorporate, as appropriate, Navy and DLA practices in department-level policies and procedures. DOD agreed with our recommendation and is taking actions to address it. DOD has identified contract pay as a key element of its SBR. DFAS, the service provider responsible for disbursing nearly $200 billion annually in the department’s contract pay, has asserted that its processes, systems, and controls over contract pay are suitably designed and operating effectively to undergo an audit. Preliminary results from our ongoing assessment of DFAS’s implementation of its FIP for contract pay audit readiness indicate that DFAS has numerous deficiencies that have not yet been remediated. For example, DFAS did not adequately perform certain planning activities, such as assessing the dollar activity and risk factors of its processes, systems, and controls, which resulted in the exclusion of three key processes from the FIP, including the reconciliation of its contract pay data to components’ general ledgers. As a result, DFAS did not obtain sufficient assurance that the contract disbursements it processed were accurately recorded and maintained in the components’ general ledgers and that the status of DOD’s contract obligations was up-to-date.
Although DFAS has asserted audit readiness for contract pay, until it corrects the weaknesses we identified, its ability to process, record, and maintain accurate and reliable contract pay transaction data is questionable. Therefore, our preliminary results indicate that DFAS does not have assurance that its FIP will satisfy the needs of DOD components or provide the expected benefits to department-wide efforts to assert audit readiness for contract pay as a key element of the SBR. In May 2014, we reported that DOD continued efforts to improve its business enterprise architecture (BEA)—a modernization blueprint—and transition plan and modernize its business systems and processes, consistent with key statutory provisions. However, we found that even though DOD has spent more than 10 years and at least $379 million on the architecture, DOD has not yet demonstrated that the BEA has produced business value for the department. For example, while DOD has established a tool that can assist in identifying potential duplication and overlap among business systems, the department has not demonstrated that it has used this information to reduce duplication and overlap. Accordingly, we recommended that the department develop guidance requiring military departments and other defense organizations to use existing BEA content to more proactively identify potential duplication and overlap. DOD agreed with our recommendation. Collectively, the limitations described in our May 2014 report put the billions of dollars spent annually on approximately 2,100 business system investments that support DOD functions at risk. Further, DOD has identified several, multifunctional ERP systems as critical to its financial management improvement efforts. In a 2012 report on four of these ERPs, we found deficiencies in areas such as data quality, data conversion, system interfaces, and training that affect their capability to perform essential business functions. 
DFAS personnel also reported difficulty in using the systems to perform day-to-day activities. We recommended that DOD ensure that (1) any future system deficiencies identified through independent assessments are resolved or mitigated prior to further deployment of the systems, (2) timelines are established and monitored for those issues identified by DFAS that are affecting the systems’ efficient and effective use, and (3) training on actual job processes is provided in a manner that allows users to understand how the new processes support their job responsibilities and the work they are expected to perform. DOD partially concurred with our first recommendation, stating that based on the nature of an identified system deficiency, it will determine whether to defer system implementation until it is corrected. DOD agreed with our recommendations to establish and monitor timelines and provide training on user roles and responsibilities. We are continuing to monitor DOD’s actions. If these business systems do not provide the intended capabilities on schedule, DOD’s goal of establishing effective financial management operations and becoming audit ready could be jeopardized. We recently reported that the Air Force did not meet best practices in developing a schedule for the Defense Enterprise Accounting and Management System (DEAMS) program. We believe that this raises questions about the credibility of the deadline for acquiring and implementing DEAMS to provide needed functionality for financial improvement and audit readiness. We recommended that the Air Force update the cost estimate as necessary after implementing our prior recommendation to adopt scheduling best practices. DOD concurred with our recommendation. A key principle for effective workforce planning is that an agency needs to define the critical skills and competencies that it will require in the future to meet its strategic program goals. 
Once an agency has identified critical skills and competencies, it can develop strategies to address gaps in the number of personnel, needed skills and competencies, and deployment of the workforce. In April 2014, we reported that DOD is addressing financial management workforce competencies and training through complementary efforts by (1) the Office of the Under Secretary of Defense for Personnel and Readiness (Personnel and Readiness) to develop a strategic civilian workforce plan that includes financial management, pursuant to requirements in the NDAA for Fiscal Year 2010, as amended, and (2) the DOD Comptroller to develop and implement a Financial Management Certification Program, pursuant to requirements in the NDAA for Fiscal Year 2012. Financial management personnel are expected to possess the competencies that are relevant to and needed for their assigned positions. These competencies include fundamentals of accounting, accounting analysis, budget execution, financial reporting, and audit planning and management, among others. Personnel and Readiness is currently working on a competency assessment tool that will be used by the department, including the financial management functional community. The tool is to capture information related to competencies, such as proficiency level, importance, and criticality, and to identify any gaps in support of the Comptroller’s Financial Management Certification Program. Phased implementation of the program began in June 2013, and the current target date for full implementation is the end of fiscal year 2014. The certification program is to be mandatory for DOD’s approximately 54,000 civilian and military financial management personnel and may take up to 2 years to complete, depending on the extent to which an individual’s prior course work and level of experience meet the new certification requirements. 
On April 14, 2014, the Deputy CFO stated that the newly implemented Financial Management Certification Program had already enrolled 22,300 financial managers and certified over 30. Without a competent workforce and effective implementation of financial management processes, systems, and controls, DOD and its components are at risk that DOD’s other financial management reform activities will not be successful, resulting in incomplete and unreliable data for decision making. To the extent that these challenges are not addressed, DOD financial management will continue to be at high risk for waste, fraud, abuse, and mismanagement. In conclusion, while DOD has several financial management improvement efforts under way and is monitoring progress against milestones, as the dates for validating audit readiness approach, DOD has emphasized asserting audit readiness by a certain date over making sure that effective processes, systems, and controls are in place to ensure that its components have improved financial management information for day-to-day decision making. However, several significant factors—including DOD component milestone slippages in meeting audit readiness dates; continuing, uncorrected DOD-wide financial management weaknesses; and inadequate risk management efforts—make it increasingly unlikely that DOD’s SBR will be audit ready by September 2014. While establishing and working toward milestones are important for measuring progress, DOD should not lose sight of the ultimate goal of implementing lasting financial management reform to ensure that it has the systems, processes, and personnel to routinely generate reliable financial management and other information critical to decision making and effective operations for achieving its missions. Overcoming DOD’s long-standing financial management challenges will require strong commitment and top leadership support. 
Chairman Carper, Ranking Member Coburn, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff members who made key contributions to this testimony include Gayle L. Fischer (Assistant Director), Gregory Marchand (Assistant General Counsel), Arkelga Braxton, Michael Bingham, Francine DelVecchio, Jason Kirwan, Susan Mata, Sheila D. M. Miller, Roger Stoltz, and Heather Rasmussen.

Defense Business Systems: Further Refinements Needed to Guide the Investment Management Process. GAO-14-486. Washington, D.C.: May 12, 2014.
DOD Financial Management: Actions Under Way Need to Be Successfully Completed to Address Long-standing Funds Control Weaknesses. GAO-14-94. Washington, D.C.: April 29, 2014.
Defense Logistics: Army Should Track Financial Benefits Realized from its Logistics Modernization Program. GAO-14-51. Washington, D.C.: November 13, 2013.
DOD Financial Management: Ineffective Risk Management Could Impair Progress toward Audit-Ready Financial Statements. GAO-13-123. Washington, D.C.: August 2, 2013.
Information Technology: OMB and Agencies Need to More Effectively Implement Major Initiatives to Save Billions of Dollars. GAO-13-796T. Washington, D.C.: July 25, 2013.
Army Industrial Operations: Budgeting and Management of Carryover Could Be Improved. GAO-13-499. Washington, D.C.: June 27, 2013.
DOD Financial Management: Significant Improvements Needed in Efforts to Address Improper Payment Requirements. GAO-13-227. Washington, D.C.: May 13, 2013.
Major Automated Information Systems: Selected Defense Programs Need to Implement Key Acquisition Practices. GAO-13-311. Washington, D.C.: March 28, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
Defense Logistics: A Completed Comprehensive Strategy is Needed to Guide DOD’s In-Transit Visibility Efforts. GAO-13-201. Washington, D.C.: February 28, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013.
Defense Logistics: DOD Has Taken Actions to Improve Some Segments of the Materiel Distribution System. GAO-12-883R. Washington, D.C.: August 3, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Defense Inventory: Actions Underway to Implement Improvement Plan, but Steps Needed to Enhance Efforts. GAO-12-493. Washington, D.C.: May 3, 2012.
Defense Logistics: Improvements Needed to Enhance DOD’s Management and Approach and Implementation of Item Unique Identification Technology. GAO-12-482. Washington, D.C.: May 3, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Given the federal government's continuing fiscal challenges, it is more important than ever that the Congress, the administration, and federal managers have reliable, useful, and timely financial and performance information to help ensure fiscal responsibility and demonstrate accountability, particularly for the federal government's largest department, the Department of Defense. GAO has previously reported that serious and continuing deficiencies in DOD's financial management make up one of three major impediments to achieving an opinion on the U.S. government's consolidated financial statements. GAO's statement focuses on (1) the effect of continuing financial management challenges on DOD management and operations and (2) DOD's efforts to improve financial management and its remaining challenges. GAO's statement is primarily based on previously issued reports, including GAO's updates on DOD high-risk areas and its audit reports on DOD's financial management, inventory management and asset visibility, weapon system costs, business transformation, and business system modernization. Long-standing weaknesses in the Department of Defense's (DOD) financial management adversely affect the economy, efficiency, and effectiveness of its operations. The successful transformation of DOD's financial management processes and operations will allow DOD to routinely generate timely, complete, and reliable financial and other information for day-to-day decision making, including the information needed to effectively (1) manage its assets, (2) assess program performance and make budget decisions, (3) make cost-effective operational choices, and (4) provide accountability over the use of public funds. 
Examples of the operational impact of DOD's financial management weaknesses include the inability to properly account for and report DOD's total assets—about 33 percent of the federal government's reported total assets—including inventory ($254 billion) and property, plant, and equipment ($1.3 trillion); the inability to accurately estimate the extent of its improper payments because of a flawed estimating methodology, which also limits corrective actions; inconsistent and sometimes unreliable reports to the Congress on estimated weapon system operating and support costs, limiting visibility needed for effective oversight of these costs; and continuing reports of Antideficiency Act violations—75 such violations reported from fiscal year 2007 through fiscal year 2012, totaling nearly $1.1 billion—which emphasize DOD's inability to ensure that obligations and expenditures are properly recorded and do not exceed statutory levels of control. DOD has numerous efforts under way to address its long-standing financial management weaknesses. The Congress has played a major role in many of the corrective actions by mandating them in various fiscal year National Defense Authorization Acts. However, improving the department's financial management operations and thereby providing DOD management and the Congress more accurate and reliable information on the results of its business operations will not be an easy task. Key challenges remain, such as identifying and mitigating risks to achieving the goals of DOD's Financial Improvement and Audit Readiness (FIAR) effort and successfully implementing the FIAR Guidance at the DOD component level, modernizing DOD's business information systems, and improving the financial management workforce. DOD is monitoring its component agencies' progress toward audit readiness. 
However, as dates for validating audit readiness approach, DOD has emphasized asserting audit readiness by a certain date instead of making sure that effective processes, systems, and controls are in place, without which it cannot ensure that its components have improved financial management information for day-to-day decision making. While time frames are important to measuring progress, DOD should not lose sight of the ultimate goal of implementing lasting financial management reform to ensure that it can routinely generate reliable financial management and other information critical to decision making and effective operations. GAO has previously made numerous recommendations for improving financial systems and business systems that provide financial information as well as related processes and internal controls. DOD has generally agreed with GAO's recommendations and is taking actions to address many of them.
Medicare hospice benefit services include nursing services, services provided by a physician to a hospice, drugs and medical supplies necessary for treating pain and other symptoms of a terminal illness, as well as dietary, spiritual, and bereavement counseling; medical social worker services; homemaker services; and short-term inpatient care both to provide respite for caregivers and to treat a patient’s symptoms. Volunteers are an important resource in delivering hospice care; Medicare requires each hospice to have volunteers provide services equal to at least 5 percent of the total paid patient care hours. The specific services that a patient should receive are outlined in a plan of care, vary based on the type and intensity of the patient’s symptoms and psychosocial needs and the needs of the patient’s caregiver, and may vary throughout the hospice stay as the patient’s condition changes. To be eligible for the Medicare hospice benefit, a patient must be certified by a physician as having a life expectancy of 6 months or less if his or her terminal illness runs its normal course. A Medicare patient who elects hospice care must waive Medicare coverage for all other services related to the terminal illness, although the patient retains coverage for services to treat other conditions. A patient may opt out of the hospice benefit and return to traditional Medicare at any time; a patient may also reelect hospice coverage at a later date. While there is no limit on the number of days an individual can receive hospice care, the prognosis of the patient’s terminal illness must be reaffirmed after the first 90 days, the first 180 days, and then every 60 days thereafter. Under the law, Medicare pays hospices a daily rate that covers all services provided to the patient. HCFA developed four hospice per diem payment categories, which reflect the intensity of the services and the location of service delivery. 
In 1986, annual updates to the rates for the four payment categories were set in law. A typical day of care provided in a patient’s residence is paid as routine home care (RHC), and in 2001, the vast majority of hospice care days, 96 percent, were billed as RHC (see table 1). Unless a hospice provides continuous home care (CHC), inpatient respite care (IRC), or general inpatient care (GIC), it is paid the RHC rate for each day the patient is under its care. Hospice care delivered during periods of crisis can be paid as CHC if the care is provided in the home for at least 8 hours within a 24-hour period beginning at midnight and at least half the care hours are delivered by a nurse. To provide respite for primary caregivers, IRC can be provided for up to 5 consecutive days in an inpatient setting. Inpatient care for symptoms that cannot be treated in the patient’s residence is paid as GIC. Hospices provide the care in their own inpatient units or arrange with hospitals, skilled nursing facilities, or other inpatient facilities to provide these services. The payment rate is adjusted by a wage index, which varies based on the patient’s residence, to account for geographic differences in wage costs. The hospice payment categories and their corresponding payment rates were developed from cost data from the 26 hospices that participated in the 1980 to 1982 Medicare demonstration. To calculate the payment rates, HCFA used cost data to identify the cost factors that contributed to providing hospice services and summed the mean cost per day of each cost factor for each of the four categories of hospice care. The costs of bereavement and volunteer services were not included in the rates. By law, hospices are subject to an annual aggregate Medicare payment cap that was meant to ensure that payments for hospice care would not exceed what Medicare would have paid if patients had been treated in a traditional setting, such as a hospital. 
Total annual payments to a hospice may not exceed a per-patient amount multiplied by the number of Medicare patients who received care from that hospice during the year. The 2004 cap amount for the 12 months beginning November 1, 2003, is $19,635.67 per Medicare patient. Hospice patients, services, and providers have changed since the demonstration. For example, the mean patient length of stay at hospices participating in the Medicare demonstration from 1980 to 1982 was 70 days; in 2001, the mean length of stay was about 50 days. While demonstration costs were based only on Medicare patients with cancer diagnoses, patients with noncancer diagnoses, who may require a different mix of services, represented approximately half of all hospice patients in 2000. In addition, hospice providers have stated that advances in end-of-life care, notably new, more costly pain-management drugs and palliative chemotherapy and radiation, have increased the costs of providing care to certain types of patients. The mix of hospice providers today differs from the provider types in the demonstration. In the demonstration, the predominant type of hospice provider was hospital-based, whereas in 2001, the predominant type was freestanding (see fig. 1). More recently, the proportion of for-profit hospices increased from almost 13 percent of all hospices in 1992 to almost 28 percent in 2001, and the percentage of hospices serving patients primarily living in rural areas rose from 32 percent in 1992 to 38 percent in 2001. In addition, the number of hospices participating in Medicare grew from 1,208 in 1992 to 2,275 in 2002, the most recent data available. We determined that for freestanding hospices, the unadjusted per diem payment rate across the four payment categories was about 8 percent higher than estimated average per diem costs in 2000, and over 10 percent higher in 2001. 
For the payment categories, we estimate that the home care (RHC and CHC) per diem payment rate was almost 10 percent higher than average home care per diem costs in 2000, and over 12 percent higher in 2001. We estimate that the IRC payment rate was almost 53 percent lower than average IRC per diem costs in 2000, and 61 percent lower in 2001. In both years, the GIC payment rate was about 7 percent higher than average GIC per diem costs. In 2000, we estimate that average per diem costs for small hospices were over 13 percent higher than for medium hospices and almost 7 percent higher than for large hospices. In 2001, average per diem costs for small hospices were over 15 percent higher than for medium hospices and almost 8 percent higher than for large hospices. With the exception of average GIC per diem costs in 2000, small hospices also had higher average per diem costs than medium or large hospices for each payment category. Medicare’s hospice payment rate, across the four payment categories and unadjusted for geographic differences in wages, was higher than freestanding hospices’ estimated average per diem cost. The unadjusted payment rate was about 8 percent higher than average per diem costs in 2000, and over 10 percent higher in 2001 (see fig. 2). The 25 percent of hospices with the lowest average per diem costs had costs that were at least 27 percent below the unadjusted payment rate in 2000, and at least 31 percent below the unadjusted payment rate in 2001. However, in 2000, average per diem costs for almost 34 percent of freestanding hospices and, in 2001, almost 32 percent of freestanding hospices, were higher than the unadjusted per diem rate. The costs of individual hospices differ depending on the mix of services provided. In addition, the payments to individual hospices differ because of the wage adjustment and the mix of payment categories billed. 
We could not determine the relationship between payments and actual costs for individual hospices because of data limitations in the hospice cost reports and claims data. Unlike those for other providers, Medicare’s hospice cost reports do not include Medicare payment information. In addition, Medicare hospice claims data contain only the total payment for all services provided during the billing period, including physician services, not the payment for each hospice payment category. The specific relationship between payment rates and costs for freestanding hospices varied among payment categories. For home care (RHC and CHC) days, we estimate that in 2000, the unadjusted per diem payment rate for freestanding hospices was almost 10 percent higher than the average per diem cost of over $92. In 2001, the per diem payment rate was over 12 percent higher than the average home care per diem cost of over $96. Nonetheless, about 35 percent of freestanding hospices in 2000, and over 32 percent in 2001, had average home care per diem costs that were higher than the home care per diem payment rate. We estimate that in 2000, the unadjusted IRC per diem payment rate for freestanding hospices was almost 53 percent lower than the average IRC per diem cost of about $218. In 2001, the IRC per diem payment rate was over 61 percent lower than the average IRC per diem cost of over $279. However, the GIC per diem payment rate was higher than average GIC per diem costs for freestanding hospices; it was over 7 percent higher than costs in both years. In addition, average per diem costs for IRC and GIC varied widely among freestanding hospices. Our estimates of average IRC and GIC per diem costs may understate actual costs because of data limitations. IRC costs may be much higher than the IRC payment rate because the hospice continues to provide services and visits to the patient in addition to paying the inpatient facility. 
Our analysis of the proprietary 2002 patient-specific visit data found that the number and type of visits provided per day to patients during IRC days were comparable to the number and type of visits per day to patients during RHC days. In 2001, IRC accounted for 0.2 percent of hospice days of care. We estimate that for 2000 and 2001, small freestanding hospices had higher average per diem costs than medium and large freestanding hospices. In 2000, average per diem costs for small hospices were more than 13 percent higher than for medium hospices and almost 7 percent higher than for large hospices. In 2001, average per diem costs for small hospices were more than 15 percent higher than for medium hospices and almost 8 percent higher than for large hospices (see table 2). With the exception of average GIC per diem costs in 2000, small hospices’ average per diem costs were higher than medium and large hospices’ costs for each individual payment category for both years. Cost disparities across providers of different sizes were greatest for IRC and GIC. As small freestanding hospices are more likely than other hospices to be located in rural areas, they are more likely to receive lower Medicare payments because the wage index adjustment generally reduces the payment rates for providers in rural areas. In 2001, 60 percent of small freestanding hospices were located in rural areas, while 35 percent of medium freestanding hospices and 10 percent of large freestanding hospices were located in rural areas. The structure of the hospice payment system may not reflect how hospices currently deliver services. For example, our analysis of the relative costs for freestanding hospices for different services provided during RHC days, the most common payment category, showed they have changed considerably since the payment rate was initially calculated, suggesting that the services delivered or the resources necessary for those services have changed over the years. 
In addition, our analysis of proprietary 2002 patient-specific visit data showed that visit frequency varied during the hospice stay, although the rate for each payment category does not. Also, the mean length of stay has decreased. Hospice officials raised concerns about some of the payment policy requirements for CHC and IRC, although our analysis of the limited available data could not confirm that the requirements restrict hospices’ ability to provide care. Finally, the annual aggregate cap was intended to help limit Medicare spending for all hospices, but it was not based on actual hospice costs, and for each year from 1999 through 2002, few hospices reached it. The relative costs of services in 2001 have changed considerably since the payment rate was developed in 1983, suggesting that the services delivered or the resources necessary for those services have changed over time. Specifically, the proportions of RHC costs attributable to nursing, drugs, social services, and durable medical equipment (DME) have increased, while the proportions attributable to home health aide services, supplies, and outpatient services have decreased (see fig. 3). In our analysis, this pattern is present across freestanding hospices of all sizes and locations. The largest cost increase occurred for drugs, which rose from 3 to 15 percent of RHC costs over this period. Hospice officials we spoke with stated that this increase was due in part to the introduction of new, more costly medications. Some stated that drugs have become one of their greatest cost pressures. Hospice visits are particularly concentrated at the beginning and end of a hospice stay, yet the payment rate of each category does not vary throughout a hospice stay. Our analysis of the 2002 patient-specific visit data showed that patients have a higher mean number of visits per day during the first, and especially the last, week of a stay. 
As a result, the costs of care are higher both at the beginning and end of a hospice stay. Officials from almost all hospices with whom we spoke also reported this pattern. They told us that at the beginning of a hospice stay they provide more visits because the patient’s symptoms, including pain, must be stabilized and the family must be educated about the patient’s care. Near the end of life, hospice officials indicated that the patient’s symptoms and needs change, usually requiring more hospice management, and the family often needs additional psychosocial support. Our analysis of the 2002 patient-specific visit data showed that patients with a length of stay of 2 weeks or less had a higher mean number of visits per day than patients with a length of stay greater than 2 weeks. Hospice officials we spoke with stated that patients who are in hospice care a short time are relatively more costly on a per diem basis because there are fewer days of lower visit frequency to balance the higher costs of the days with more visits at the beginning and end of the stay. In 1983, the Medicare hospice per diem payment amounts accounted for the variation in daily hospice costs because they were based on the mean daily costs incurred by the hospices in the demonstration over a mean hospice stay of 70 days. However, hospice stays are considerably shorter now; the mean length of stay was 50 days in 2001. Mean daily costs may now be very different because of the change in the length of stay. No data are available, however, to compare costs at different points during a stay or for stays of different lengths. Hospice officials we spoke with raised concerns about some of the policy requirements for particular payment categories, although our analysis of the available data could not confirm their concerns. For example, to bill for CHC, Medicare requires that a nurse provide at least half of billed CHC hours. 
Hospice officials stated that this could restrict the hospice’s ability to provide the most appropriate care when a social worker was a more appropriate caregiver than a nurse. The officials were also concerned that the 8-hour minimum required for billing CHC payment, counted from midnight of one day until midnight of the next, could restrict their ability to bill for CHC. For example, if a patient dies in less than 8 hours or the hospice provides 8 hours of services over 2 calendar days, the hospice must bill for RHC. Our analysis of 2001 Medicare hospice claims indicated that the mean number of hours provided on a CHC day was 18 hours, considerably above the 8-hour minimum. Similarly, our analysis of the 2002 patient-specific visit data from one large, freestanding hospice showed a mean of 20 hours provided on each CHC day. Therefore, instances of continuous care hours that fall just short of 8 hours, for which a hospice cannot bill CHC hours, do not occur often based on the patient-specific visit and claims data. Hospice officials we spoke with also stated that the statutory requirement that respite care be provided in an inpatient setting might hinder its use. Specifically, they stated that while primary caregiver respite is important, enabling patients to remain at home rather than moving them to an inpatient facility is also important; primary caregivers may not take respite in order to avoid moving the patient to an inpatient facility. Few hospices we spoke with currently provide home respite care for extended periods. They said this is largely because the costs are higher than the RHC payment rate, which is the payment category the hospices must bill for these services. Data related to home respite care are not available, although it is likely that the costs of providing 24 hours of home respite care would be higher than RHC costs. 
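The CHC billing conditions discussed above (at least 8 hours of care within a single midnight-to-midnight calendar day, with at least half the hours delivered by a nurse) can be restated as a minimal sketch. The function and parameter names below are illustrative, not CMS terminology, and the sketch deliberately ignores details such as partial hours or documentation requirements.

```python
# Minimal sketch of the CHC billing test described in the report:
# a crisis-care day is billable as continuous home care (CHC) only if
# (1) at least 8 hours of care fall within one calendar day (midnight
# to midnight), and (2) at least half of those hours are delivered by
# a nurse. Otherwise the day must be billed as routine home care (RHC).
# Names here are illustrative assumptions, not official terminology.

def billing_category(total_hours: float, nurse_hours: float) -> str:
    """Return the payment category for one calendar day of crisis care."""
    if total_hours >= 8 and nurse_hours >= total_hours / 2:
        return "CHC"
    return "RHC"

# Examples matching the concerns hospice officials raised: a patient who
# dies after 6 hours of care, or whose 8 hours straddle two calendar
# days (say, 5 hours before midnight and 3 after), falls short on each day.
assert billing_category(10, 6) == "CHC"   # 10 hours, 6 by a nurse
assert billing_category(6, 6) == "RHC"    # under the 8-hour minimum
assert billing_category(10, 4) == "RHC"   # nurse share below one-half
```

As the report notes, claims data showing a mean of 18 to 20 CHC hours per billed day suggest that days failing this test just short of the 8-hour threshold are uncommon in practice.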
According to our analysis of data from the regional home health intermediaries, the contractors responsible for processing and paying Medicare hospice claims, less than 2 percent of all hospice providers reached the annual aggregate payment cap each year from 1999 through 2002. In 1982, the Congress required HCFA to calculate a cap that limited a hospice’s total payments to a specific per-patient amount based on the Medicare costs incurred for patients with cancer during the last 6 months of life. However, a subsequent law enacted before the hospice benefit was implemented set a per-patient cap amount that was not based on the cost data; for the 12 months beginning November 1, 2003, the cap was $19,635.67 per Medicare patient. The cap is intended to ensure that payments for hospice care do not exceed what Medicare would have spent if patients had been treated in a traditional setting, such as a hospital. However, it affects few hospices, and therefore may not represent a meaningful limit. Hospice officials we spoke with who discussed the cap said it did not affect them. CMS has not evaluated the hospice per diem payment rates and methodology since they were developed to determine the relationship between payments and costs and whether the per diem methodology is consistent with current patterns of care. There are several indications that hospice payments may not be appropriately distributed across days of care or types of providers. The type of care provided during a hospice stay appears to be different than when the hospice per diem payment rates and methodology were developed. Comprehensive data are not available, however, to evaluate the number of visits or costs of services provided during a Medicare hospice stay. While our analysis of the limited data available indicates that the overall Medicare payment rate across all payment categories was above estimated costs, IRC costs were considerably above the payment rate. 
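The annual aggregate cap described above reduces to a single multiplication: total Medicare payments for the cap year may not exceed the per-patient cap amount times the number of Medicare patients served. A minimal sketch, using the 2004 cap amount cited in the report (function and variable names are illustrative assumptions):

```python
# Sketch of the annual aggregate cap test described in the report.
# The per-patient figure is the 2004 cap amount for the cap year
# beginning November 1, 2003; names are illustrative, not CMS's.

CAP_PER_PATIENT = 19_635.67

def excess_over_cap(total_payments: float, medicare_patients: int) -> float:
    """Amount, if any, by which a hospice's payments exceed its aggregate cap."""
    aggregate_cap = CAP_PER_PATIENT * medicare_patients
    return max(0.0, total_payments - aggregate_cap)

# A hospice serving 100 Medicare patients has an aggregate cap of about
# $1.96 million; payments under that amount leave no excess to repay.
assert excess_over_cap(1_500_000, 100) == 0.0
assert round(excess_over_cap(2_000_000, 100), 2) == 36433.0
```

Because the cap scales with patient counts rather than actual costs, this arithmetic helps explain the report's finding that fewer than 2 percent of hospices reached it in any year from 1999 through 2002.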
Further, small freestanding hospices had substantially higher average per diem costs than other hospices. As a result, a comprehensive analysis of patient-specific data may show that modifications to the hospice payment methodology are warranted. Because the payment rates for the four hospice payment categories, the per diem methodology, and the cap are set by law, CMS’s ability to make modifications to the payment approach is limited.

We recommend the following three actions. First, we recommend that the Administrator of CMS collect comprehensive, patient-specific data on the visits and services being delivered by hospices and the costs of these services. Second, using these data, the Administrator should determine whether the hospice payment methodology and payment categories need to be modified, including any special adjustments for small providers. Third, the Administrator should implement those modifications that would not require a change in Medicare law and submit a legislative proposal to the Congress for those that do.

We received written comments on a draft of this report from CMS (see app. II). We also received oral comments from two industry groups, the Hospice Association of America (HAA) and the National Hospice and Palliative Care Organization (NHPCO), as well as from the large, for-profit hospice that provided the patient-specific visit data. In commenting on a draft of this report, CMS stated that it agreed with our recommendations and intends to use our findings to supplement and reinforce preliminary evaluations the agency has made and future studies that are planned. In responding to our recommendation that it collect comprehensive, patient-specific data on hospice visits and services and the costs of these services, CMS stated that it recognized the need for this type of analysis. 
It stated that collection of these data would require additional research funding, and it is uncertain when such funding would be available. CMS noted that it has initiated efforts to collect data on costs with the recent establishment of the hospice cost reports. CMS stated that it hoped the recommendations in our report could help the agency in developing a comprehensive research strategy for the hospice benefit. In responding to our recommendation that CMS determine whether the hospice payment methodology and payment categories need to be modified, including any adjustments for small providers, CMS agreed that the methodology implemented in 1983 was based on a delivery model that may have changed since that time. It concurred that the methodology should be reevaluated to determine its current appropriateness. It again stated that research funding is limited. CMS agreed that the costs of drugs and other therapies, the number of hospice beneficiaries with noncancer diagnoses, and the mean length of stay have all changed since 1983. However, CMS stated that we did not demonstrate in the draft report that the provision of these and other therapies had increased the cost of providing care beyond the present payment. In the draft report, we stated that there may be problems with the distribution of hospice payments, but that comprehensive data are not available to evaluate the number of visits or costs of services provided during a Medicare hospice stay. As noted in the draft report, the overall payment rate across all types of care is higher than our estimate of hospices’ overall costs. In its comments, CMS also raised concerns that we implied that payment methodology changes be made for small hospices before CMS collects comprehensive data. We have clarified our conclusion to indicate the need for comprehensive, patient-specific data on the visits and services delivered by hospices and the costs of these services to inform any changes to the payment methodology. 
In response to our recommendation that CMS should submit a legislative proposal to the Congress to implement those modifications that would require a change in Medicare law, CMS stated that should it determine changes are necessary, it would evaluate those changes as part of its overall legislative strategy. CMS also made technical comments, which we incorporated where appropriate. The external reviewers generally agreed with our findings and recommendations. Comments on specific portions of the draft report centered on two areas: our scope and methodology and the hospice payment methodology. Regarding our scope and methodology, HAA and NHPCO were concerned that we based our findings on Medicare freestanding hospice cost reports that had not been audited. The large, for-profit hospice noted that the cost report is complex and that hospices’ accounting systems are not generally compatible with its structure. Similarly, HAA and NHPCO stated that hospices may not have had sufficient experience with completing the cost reports at the time of our review. NHPCO stated that our exclusion of hospice cost reports with fewer than 11 total patients or an average of less than 1 patient per day might have excluded a substantial number of cost reports. In addition, HAA and NHPCO recommended that we include bereavement counseling costs in our per diem cost calculation. They stated that although Medicare is precluded from paying hospices for bereavement counseling, it is a required service, and excluding it from the per diem cost calculation may misrepresent the amount by which payment rates exceeded hospice costs. Regarding reviewers’ concerns about our use of unaudited cost reports, BBRA directed us to examine hospice cost factors. Information on these factors is available only from cost reports, which CMS has not audited. 
As stated in the draft report, we assessed the reliability of the cost reports by comparing descriptive statistics calculated using the cost reports with those calculated using the Medicare hospice claims, and found the data suitable for our purposes. Regarding reviewers’ concerns about data we excluded from our analysis, we excluded 51 of 992, or 5 percent, of freestanding hospice cost reports in 2000, and 48 of 975, or 5 percent, of freestanding hospice cost reports in 2001, because they had fewer than 11 total patients or an average of less than 1 patient per day. We excluded these cost reports because we believe that these hospices either had too few patients to be representative of all hospices, or may have been reporting data incorrectly. We do not believe that these represent substantial numbers of cost reports and consider our exclusion criteria appropriate. Concerning the comments that we should include bereavement costs in our per diem cost calculation, as stated in the draft report, we included only Medicare-reimbursable costs in our calculations. If Medicare cannot, by law, pay hospices for bereavement services, it is inappropriate to include them in a per diem cost that is compared to a payment rate that is not designed to cover these costs. In 2001, bereavement costs were small, equal to less than 2 percent of total Medicare-reimbursable costs.

Reviewers also commented on the hospice payment methodology. NHPCO stated that costs on the cost report may not reflect the provision of all services that could potentially be provided because hospices may manage their costs to more closely approximate the per diem rate. According to NHPCO, although the provision of additional services may be warranted, hospices cannot pay for them and therefore do not provide them. 
HAA and NHPCO stated that instances of CHC provision that fall just short of 8 hours may not seem to occur often because hospices avoid providing CHC if they know they will not be able to provide at least 8 hours. However, HAA and NHPCO also stated that data to determine whether this is the case are not available. Regarding industry comments on hospice costs and the hospice payment methodology, we acknowledge that hospices may manage their costs to closely approximate the per diem rate, and that hospices may not provide CHC if they know they will not be paid for that level of care. Data are not available to evaluate whether either of these situations occurs. Reviewers also made technical comments, which we incorporated where appropriate.

We are sending copies of this report to the Administrator of CMS and appropriate congressional committees. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others on request. If you or your staffs have any questions, please call me at (202) 512-7119 or Nancy A. Edwards at (202) 512-3340. Other major contributors to this report include Beth Cameron Feldpush, Joanna L. Hiatt, and Gordon W. Richmond.

To examine hospice costs and Medicare payments, we used 2000 and 2001 Medicare hospice cost reports, the financial documents that hospices submit annually to the Centers for Medicare & Medicaid Services (CMS), and 2000 and 2001 Medicare hospice claims data, bills submitted by hospices to receive Medicare payment. We also used proprietary 2002 patient-specific visit data from a large for-profit hospice, which has been collecting these data for its internal use since 1994. 
We interviewed officials from CMS and one regional home health intermediary, a contractor responsible for processing and paying Medicare hospice claims, in addition to officials from AARP, the Hospice Association of America, the National Hospice and Palliative Care Organization, and the Visiting Nurse Associations of America. We also spoke with representatives from 18 hospices, several national independent and academic hospice researchers, and two physicians who provide hospice care. Finally, we conducted a site visit to a freestanding hospice with an inpatient unit. To assess the reliability of the cost report data, we compared descriptive statistics calculated using the cost reports with those calculated using the Medicare hospice claims data. Because hospices began submitting cost reports in 1999, we also compared our calculations from the 2000 cost reports to those from the 2001 cost reports to ensure that hospices had provided consistent data. To assess the reliability of the claims data, we compared descriptive statistics calculated using the claims with statistics published by CMS. To assess the consistency of the 2002 patient-specific visit data, we verified that the distribution of visits in the 2002 data was similar to the distribution of visits in 1997 and 1999. In addition, before releasing these data to us, the hospice performed quality assurance edits, which consisted of confirming that the data provided to us were identical to the data in its database for more than 20 randomly selected patients. Finally, we calculated descriptive statistics and compared them with statistics for all hospices calculated using the Medicare hospice claims. We determined that the cost report, claims, and patient-specific data were all suitable for our purposes. The 2000 and 2001 hospice cost reports were the most recent data available at the time of our analysis. 
The Medicare payment methodology is the same for freestanding and facility-based hospices; however, we confined our analysis to cost reports of freestanding hospices. We excluded hospital-based and home health agency-based hospices because we found that their per diem costs were generally much lower than those of freestanding hospices, which may result from decisions made by these providers in allocating overhead costs between the hospital or home health agency and the hospice. For freestanding hospices, the only costs incurred are for delivering hospice care to patients. We excluded freestanding cost reports that reported no or low Medicare utilization, those that had cost reporting periods of fewer than 10 or greater than 14 months, and those outside the 50 states or District of Columbia. We also excluded cost reports that had fewer than 11 total patients or an average of less than 1 patient per day, those with no costs, and those reporting costs outside three standard deviations of the mean. Our final sample included 82 percent of all freestanding hospice cost reports in 2000 and 80 percent in 2001. We calculated freestanding hospices’ total Medicare-reimbursable costs by subtracting nonreimbursable costs, such as bereavement and fund-raising, from total costs. To obtain average per diem costs, we summed total Medicare-reimbursable costs across all providers and divided by total hospice days across all providers. In addition, because of the cost report design, certain inpatient respite care (IRC), general inpatient care (GIC), and physician costs may be included in our estimate of combined routine home care (RHC) and continuous home care (CHC), or home care, costs. As a result, home care costs may be overestimated, which would result in our understating the amount by which the unadjusted home care payment rate exceeds average home care per diem costs. 
Because of the way cost centers are defined on the cost reports, the costs of IRC and GIC may be underestimated. We based the size of a hospice in each year on the number of days of care it provided that year. Small hospices were those that reported total days of care less than the 25th percentile of all hospices’ total days of care. Medium hospices were those that reported total days of care equal to or greater than the 25th percentile and less than or equal to the 75th percentile of all hospices’ total days of care. Large hospices were those that reported total days of care greater than the 75th percentile of all hospices’ total days of care. We defined a hospice as urban if it was located in a county that was in a metropolitan statistical area and as rural if it was located in a county that was not in a metropolitan statistical area, as determined by the Office of Management and Budget as of June 30, 1999. We could not compare the 2000 and 2001 per diem costs we calculated to actual payments because hospice cost reports do not report Medicare payment information. In addition, Medicare hospice claims contain only the total payment for all services provided during the billing period, including physician services, not the payment for each payment category. Therefore, we calculated a 2000 and 2001 unadjusted payment rate that encompassed all payment categories. We did so by weighting the individual rates of the four payment categories by their respective utilization in the freestanding hospice cost reports in our final sample in each year. The costs for home care, combined RHC and CHC, are reported in aggregate on the hospice cost report. Therefore, we calculated a 2000 and 2001 unadjusted payment rate that encompassed RHC and CHC. We did so by weighting the individual rates of these two categories by their respective utilization in the freestanding hospice cost reports in our final sample in each year. 
In addition, we weighted the overall unadjusted payment rate and the unadjusted payment rate for each payment category to account for the different payment rates in effect during the year. The majority of freestanding hospices report costs using a calendar year reporting period, while payment rates are updated on a fiscal year basis, that is, on October 1 of each year. Therefore, during a calendar year, one payment rate is in effect from January 1 through September 30 and another from October 1 through December 31. Our unadjusted payment rates do not account for the wage adjustment Medicare applies to payments. To determine the proportion of total cost in 2001 accounted for by each service, such as nursing or home health aide services, that was included in the 1983 RHC rate, we grouped the services on the cost report into categories similar to the 1983 services, and divided by the total cost. Our estimates of the proportions of 2001 RHC costs include CHC costs because the costs of RHC and CHC are reported in aggregate on the hospice cost report. It is likely that CHC costs were a very small proportion of combined RHC and CHC costs, as CHC days accounted for just over 1 percent of total hospice days in 2001. To determine the percentage of total hospice days accounted for by each payment category and the mean CHC hours per CHC day for all hospices, we used 2000 and 2001 Medicare hospice claims data, the years that matched most closely with the cost reports used for our analysis. We excluded from our analysis patients who were younger than 20 or older than 110 years of age, who lived outside of the 50 states or the District of Columbia, and who had total hospice payments that fell below 1 day of care at the lowest wage-adjusted RHC payment rate and above 1 year of care at the highest wage-adjusted RHC payment rate. Our final sample included over 98 percent of all claims in both 2000 and 2001. 
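The per diem cost and rate-weighting calculations described in the preceding paragraphs can be sketched as follows. All dollar figures and day counts below are illustrative, not the actual cost report or claims values.

```python
# Sketch of the methodology described above: (1) average per diem cost,
# (2) a utilization-weighted unadjusted payment rate across payment
# categories, and (3) blending the two fiscal-year rates in effect
# during a calendar year. Figures are illustrative only.

def average_per_diem_cost(providers):
    """Sum Medicare-reimbursable costs across all providers, divide by total days."""
    total_cost = sum(p["total_cost"] - p["nonreimbursable_cost"] for p in providers)
    total_days = sum(p["days"] for p in providers)
    return total_cost / total_days

def unadjusted_payment_rate(rates, days_by_category):
    """Weight each category's per diem rate by its share of total hospice days."""
    total_days = sum(days_by_category.values())
    return sum(rates[c] * d for c, d in days_by_category.items()) / total_days

def blended_rate(rate_jan_sep, rate_oct_dec, days_jan_sep, days_oct_dec):
    """Weight the two fiscal-year rates in effect during one calendar year."""
    total = days_jan_sep + days_oct_dec
    return (rate_jan_sep * days_jan_sep + rate_oct_dec * days_oct_dec) / total

# Illustrative inputs (not actual GAO data):
providers = [
    {"total_cost": 1_200_000, "nonreimbursable_cost": 20_000, "days": 11_000},
    {"total_cost": 450_000, "nonreimbursable_cost": 8_000, "days": 4_200},
]
rates = {"RHC": 101.84, "CHC": 594.00, "IRC": 105.00, "GIC": 453.00}
days = {"RHC": 14_500, "CHC": 150, "IRC": 60, "GIC": 490}

print(round(average_per_diem_cost(providers), 2))
print(round(unadjusted_payment_rate(rates, days), 2))
print(round(blended_rate(96.00, 101.84, 270, 95), 2))
```

Comparing the first two printed values corresponds to the report's comparison of estimated average per diem costs with the unadjusted payment rate; the wage adjustment Medicare applies to actual payments is deliberately omitted, as it was in the analysis.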
To analyze the frequency and types of visits to hospice patients, we used proprietary 2002 data on Medicare hospice patients collected by a large, for-profit hospice with multiple freestanding facilities. We determined the number of visits per day and the number of nurse, home health aide, counselor, and other caregiver visits per day for all days and for days within each of the four payment categories. We also analyzed whether there were differences in the number of visits per day provided by patient length of stay, patient residence, diagnosis, number of secondary conditions, and age and determined the number of visits in the first and last week of a stay and for the remaining days of a stay. We conducted our work from January 2003 through October 2004 in accordance with generally accepted government auditing standards.
The Medicare hospice benefit provides care to patients with a terminal illness. For each patient, hospices are paid a per diem rate corresponding to one of four payment categories, which are based on service intensity and location of care. Since implementation in 1983, the payment methodology and rates have not been evaluated. The Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 directed GAO to study the feasibility and advisability of updating Medicare's payment rates for hospice care. In this report, GAO (1) compares freestanding hospices' costs to Medicare payment rates and (2) evaluates the appropriateness of the per diem payment methodology. Because of Medicare data limitations, it was not possible to compare actual payments to costs or examine the services provided to each patient. Using Medicare cost reports from freestanding hospices, GAO determined that the per diem payment rate for all hospice care was about 8 percent higher than the estimated average per diem cost of providing care in 2000, and over 10 percent higher in 2001. However, the relationship between payment rates and costs varied across the payment categories and types of hospices. For all hospice care provided in the home, which accounted for about 97 percent of care in 2001, GAO estimates that the per diem payment rate was almost 10 percent higher than average per diem costs in 2000, and over 12 percent higher in 2001. Small hospices, however, had higher estimated average per diem costs than medium or large hospices overall and for each of the four per diem payment categories in 2001. GAO's analysis indicates that the hospice payment methodology, with rates based on the historical mix and cost of services, a per diem amount that varies only by payment category, and a cap on total Medicare payments, may not reflect current patterns of care. 
For example, GAO determined that the relative costs of services, such as nursing care, provided during routine home care (RHC) have changed considerably since the rates were calculated. Using limited patient-specific hospice visit data, GAO found that more visits were provided during the first, and especially last, week of a hospice stay than during other times in the stay. Finally, few hospices reached the payment cap, which was intended to limit Medicare hospice spending.
RUS, established by the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994), administers the electricity and telecommunications loan programs that formerly were operated by the Rural Electrification Administration (REA). As part of a general program of unemployment relief, REA was first established by executive order in 1935 to provide loan funds to support the electrification of rural America. At that time, most utilities served high-density areas and did not extend lines to farmers and other rural residents. In 1936, REA was given the statutory authority to operate the electricity loan program, and in 1939, REA became part of USDA. In 1949, REA was authorized to lend funds for telephone services in rural areas. In recent years, RUS has made or guaranteed an average of about $1.4 billion per year in loans to help borrowers develop, upgrade, or expand their electricity and telecommunications systems. As of June 30, 1997, the outstanding principal on RUS’ electricity and telecommunications loans totaled about $36 billion. The Rural Electrification Act of 1936, as amended (7 U.S.C. 901 et seq.), referred to as the RE Act, provides the basic statutory authority for the electricity and telecommunications programs. RUS makes electricity loans—both direct and guaranteed—primarily to electric cooperatives. It makes direct loans to construct and maintain the distribution facilities that provide electricity to users. It also provides guarantees on loans that are made by other lenders for financing the construction, repair, and improvement of electricity generating and transmission facilities. Nearly all borrowers with electricity loans are nonprofit cooperatives. RUS’ direct loans include both hardship rate loans and municipal rate loans. 
Hardship rate loans are made to borrowers that meet the following criteria: (1) Their customers have below-average per capita income or below-average median household income for the state, and (2) they have a relatively high cost for providing service, as indicated by a high average revenue per kilowatt-hour sold. Hardship rate loans have a 5-percent interest rate. Generally, municipal rate loans are made to qualified borrowers that do not meet the criteria for hardship rate loans. Municipal rate loans have an interest rate that is tied to an index of municipal bond rates; the rate can change quarterly. All electricity loans on which RUS has provided repayment guarantees in recent years have been made by the Treasury’s Federal Financing Bank (FFB). These loans have an interest rate equal to the Treasury’s cost of money plus one-eighth of 1 percent. While RUS can also guarantee electricity loans made by commercial lenders, it has not done so in recent years because borrowers have applied for loans from the FFB, which has lower interest rates than those available from commercial lenders. RUS makes telecommunications loans—both direct and guaranteed—primarily to commercial telephone companies and cooperatives to build and improve telephone and telecommunications facilities and services. These loans are also made for advanced telecommunications facilities and services, such as fiber-optic cabling, digital-switching equipment, and educational television applications. About 72 percent of the borrowers with telecommunications loans are for-profit companies, while the others are mostly nonprofit cooperatives. RUS’ direct loans are hardship rate loans and cost-of-money rate loans. 
Hardship rate loans are made to borrowers that meet the following criteria: (1) an average of four or fewer customers per mile of telecommunications line in their current service areas, (2) income that is 1 to 3 times more than their interest expenses, and (3) an average of 17 or fewer customers per mile in the area to be served by the project to be funded with the loan. These loans have a 5-percent interest rate. Generally, cost-of-money rate loans are made to borrowers that do not qualify for hardship rate loans and that have an income of 1 to 5 times more than their interest expenses; these loans have an interest rate that matches USDA’s cost of money, which currently exceeds the rate for hardship rate loans. RUS also administers the Rural Telephone Bank (RTB) loan program, in which direct loans are made concurrently with cost-of-money rate loans. RTB loans have an interest rate that matches RTB’s cost of money. RUS also provides guarantees on loans made to commercial telephone companies and cooperatives. As with electricity loans, all guaranteed telecommunications loans in recent years have been made by the FFB, at an interest rate equal to the Treasury’s cost of money plus one-eighth of 1 percent. RUS guaranteed only FFB loans because borrowers applied for FFB loans rather than for commercial lenders’ loans. During fiscal year 1994 through the first three-quarters of fiscal year 1997, RUS made or provided guarantees on 926 electricity and telecommunications loans; these loans totaled about $4.9 billion. Table 1 shows the total number and amount of loans made in each program during this period. According to RUS’ reports, the outstanding principal owed on electricity and telecommunications loans totaled about $36 billion as of June 30, 1997. Table 2 shows the amount owed in each program. RUS’ electricity and telecommunications loans are intended to assist in the development of the nation’s rural areas. 
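The telecommunications hardship-loan tests described above can be expressed as a simple eligibility check. The field names are hypothetical, and the income test is read here as income between 1 and 3 times interest expenses; this is a sketch of the report's paraphrase of the criteria, not of the RUS regulations themselves.

```python
# Sketch of the three telecommunications hardship-rate loan criteria the
# report describes: (1) no more than 4 subscribers per mile in the current
# service area, (2) income between 1 and 3 times interest expenses, and
# (3) no more than 17 subscribers per mile in the project area.
# All field names are hypothetical.

def qualifies_for_hardship_rate(borrower):
    """Return True if a borrower meets all three hardship-rate tests."""
    income_to_interest = borrower["income"] / borrower["interest_expense"]
    return (
        borrower["subscribers_per_mile_current"] <= 4
        and 1 <= income_to_interest <= 3
        and borrower["subscribers_per_mile_project"] <= 17
    )

applicant = {
    "subscribers_per_mile_current": 3.2,
    "income": 450_000,
    "interest_expense": 200_000,  # income is 2.25x interest expense
    "subscribers_per_mile_project": 12,
}
print(qualifies_for_hardship_rate(applicant))  # all three tests are met
```

A borrower failing any one test would instead be considered for a cost-of-money rate loan, provided its income fell within the broader 1-to-5-times range the report describes.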
Modifying certain aspects of the electricity and telecommunications loan programs could aid in reaching this goal while reducing the government’s cost. First, lending practices could be modified to ensure that the loans benefit areas with low populations, thereby more effectively using the agency’s limited loan funds. Currently, borrowers serving areas that are heavily populated sometimes receive loans. Second, RUS’ subsidized direct loans could be focused on borrowers that are not capable of using their own resources or of obtaining loans from the private sector to fund their utility projects. Targeting subsidized direct loans to borrowers in need of federal assistance could result in the more effective use of the loan funds. Currently, financially healthy borrowers sometimes receive these subsidized loans. Finally, a graduation program could be instituted to attempt to move RUS’ financially viable borrowers with direct loans to commercial sources of credit. This action could allow the agency to reduce the interest and administrative-servicing expenses that it now incurs. A fundamental concept of both the electricity and the telecommunications loan programs is that funds are to be provided to borrowers for delivering service to sparsely populated rural areas. RUS’ regulations require borrowers in both programs to establish that they serve rural areas when they apply for their first loan. Generally, for a new borrower, the population threshold is less than 2,500 for the electricity program and no more than 5,000 for the telecommunications program. However, in both programs, subsequent loans for service can be made without the borrower’s having to meet the initial test of serving a rural area. In addition, as the RE Act allows, telecommunications loans can be made for service to nonrural areas when that service is considered incidental to providing service to a rural area. 
We found that RUS sometimes makes loans to existing borrowers for providing service to areas where the population exceeds original thresholds for rural areas. For example, an electricity distribution borrower that first received a loan in 1945 received another loan in 1996; in the year prior to receiving this recent loan, the borrower had almost 140,000 customers. This borrower provided service to customers in five counties; one county had about 55,800 residential customers, and another had about 45,100 residential customers. None of these counties was classified as completely rural by USDA’s Economic Research Service—all contained an urban population that exceeded 2,500. Furthermore, two of the counties were within a metropolitan area having a population of at least 1 million. Likewise, a telecommunications borrower that first received a loan in 1964 received another loan in 1996; this borrower had about 49,600 residential customers and about 13,700 business subscribers in the year prior to receiving the latest loan. This borrower provided service to customers located in one county, which also was identified by the Economic Research Service as being a county within a metropolitan area having a population that was between 250,000 and 1 million people. While we did not evaluate the population density of the areas served by all of RUS’ electricity and telecommunications borrowers, we did examine customer service statistics as an indicator of population density. We found that 71 electricity distribution borrowers that received loans during calendar years 1994 through June 30, 1997, had more than 25,000 customers; 20 of these borrowers had more than 50,000 customers. Nine of the telecommunications borrowers had more than 25,000 customers; five of these borrowers served a customer base of more than 50,000. (See table 3.) 
Unlike the requirements for some other USDA rural credit programs—such as the water and waste disposal, farm, single-family housing, and community facilities loan programs—the RE Act does not require electricity and telecommunications loan applicants to demonstrate that they cannot obtain credit from other lenders before applying for a RUS loan. The act also does not preclude a financially healthy borrower from receiving a RUS loan. As a result, RUS’ loans are sometimes made to financially healthy borrowers that may not need federal assistance to fund their utility projects. In addition, some financially healthy borrowers obtain municipal rate loan funds at interest rates lower than the rate available on hardship rate loans. The RE Act does not address the effect of an applicant’s financial health on the applicant’s eligibility to obtain loans in either program. For telecommunications loans, however, the relationship between income and interest expenses influences the type of loan that an applicant may qualify to receive. The RE Act does state that a loan cannot be denied or reduced on the basis of a borrower’s level of general funds. However, a provision in 7 U.S.C. 930—a congressional policy declaration on RUS’ loan programs that is not part of the RE Act—states that the agency’s electricity and telecommunications borrowers should be encouraged and assisted in satisfying their credit needs either internally or through other credit sources. Many electricity borrowers that obtained loans during calendar years 1994 through June 30, 1997, had favorable financial characteristics. Specifically, almost 56 percent of the borrowers had equity—total assets less total liabilities—of $10 million or more at the end of the year prior to receiving the loans, and another 43 percent had equity of between $1 million and $10 million. 
In addition, about 40 percent of the borrowers made a profit (net income) of $1 million or more in the year prior to receiving the loans, and another 55 percent made a profit of between $100,000 and $1 million. (App. I provides detailed information on electricity loans to borrowers by various incremental ranges of equity and profit.) The electricity borrowers also had generally favorable current, debt-to-asset, and times-interest-earned ratios (TIER). The current ratio is a measure showing the extent to which a borrower has sufficient current assets to cover its current liabilities. About 41 percent of the borrowers had a current ratio of 2 or more times, indicating that their level of current assets was at least twice the level of their current liabilities. The debt-to-asset ratio reflects a borrower’s debt as a percentage of its assets—it shows the extent to which a borrower has sufficient assets to cover all of its debt. Eighty-six percent of the borrowers had a generally favorable debt-to-asset ratio of 70 percent or less, including 7 percent whose ratio was no more than 40 percent. The TIER shows the extent to which a borrower can pay its annual interest expenses from its net income. Sixty-two percent of the borrowers had a TIER of 2 or more times, which reflects their having at least twice the level of net income as interest expenses. (App. I also provides detailed information on electricity loans to borrowers by various incremental ranges of these three ratios.) The following are examples of electricity loans to borrowers that had high levels of equity and/or profit. A distribution borrower that received a $4.5 million loan in 1997 had equity of $48.7 million, or almost 11 times the loan amount, at the end of 1996; this borrower also had $5.1 million in profit in 1996. Another distribution borrower had over 3 times more profit than the RUS loan amount. 
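The three measures described above are simple arithmetic on a borrower's financial statements. The following is a minimal sketch with invented figures (the `financial_ratios` helper is hypothetical, not anything RUS uses), following the report's descriptions of the ratios:

```python
def financial_ratios(current_assets, current_liabilities,
                     total_debt, total_assets,
                     net_income, interest_expense):
    """Compute the three measures as the report describes them."""
    return {
        # current assets available per dollar of current liabilities
        "current_ratio": current_assets / current_liabilities,
        # debt as a percentage of total assets
        "debt_to_asset_pct": 100.0 * total_debt / total_assets,
        # times-interest-earned: net income relative to annual interest
        # expense, per the report's description of TIER
        "tier": net_income / interest_expense,
    }

# Illustrative borrower (figures are invented, not drawn from the report):
r = financial_ratios(current_assets=8.0, current_liabilities=4.0,
                     total_debt=70.0, total_assets=100.0,
                     net_income=3.0, interest_expense=1.5)
# current_ratio = 2.0 ("2 times"), debt_to_asset_pct = 70.0 (percent),
# tier = 2.0 ("2 times")
```

A borrower with these figures would sit exactly at the favorable thresholds the report cites: a current ratio of 2, a debt-to-asset ratio of 70 percent, and a TIER of 2.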
Specifically, this borrower received a $630,000 loan in 1994 and had a profit of $2.1 million in 1993; this borrower also had $12.2 million in equity at the end of 1993. Likewise, a power supply borrower that received a $5.3 million loan in 1995 had $9.1 million in profit in 1994; this borrower had $226.4 million in equity at the end of 1994. Many telecommunications borrowers that obtained loans during calendar years 1994 through June 30, 1997, also had favorable financial characteristics. Specifically, about 24 percent of the borrowers had equity of $10 million or more at the end of the year prior to receiving the loans, and another 65 percent had equity of between $1 million and $10 million. In addition, about 29 percent of the borrowers made a profit of $1 million or more in the year prior to receiving the loans, and another 61 percent made a profit of between $100,000 and $1 million. Furthermore, about 80 percent of the borrowers had a current ratio of 2 or more times, 83 percent had a debt-to-asset ratio of 70 percent or less, and 87 percent had a TIER of 2 or more times. (App. I provides detailed information on telecommunications loans to borrowers by various incremental ranges of equity and profit, as well as these three ratios.) The following are examples of telecommunications loans to borrowers that had high levels of equity and/or profit. A borrower that received a $1.1 million loan in 1995 had equity of about $9.2 million, or more than 8 times the loan amount, at the end of 1994; this borrower also had $800,000 in profit in 1994. Another borrower that received a loan of $10.4 million in 1994 had $11.7 million in profit in 1993; this borrower also had $82.9 million in equity at the end of 1993. RUS incurs a considerable expense in providing direct loans to financially healthy borrowers. 
The principal cost is associated with the interest rate subsidies—the interest costs associated with loans made at rates below the rate at which RUS borrows from the Treasury. Specifically, RUS’ estimated total subsidy costs (not including its administrative costs) on direct electricity and telecommunications loans made during fiscal years 1994 through 1996 totaled $227.5 million: $49.6 million on hardship rate loans and $148.9 million on municipal rate loans in the electricity program (many more municipal rate loans than hardship rate loans were made) and $29 million on hardship rate loans in the telecommunications program. We did not quantify the portion of this estimated cost that relates to interest rate subsidies and the portion that relates to default costs, fees, and other costs. However, hardship rate loans in both programs are made at interest rates that are less than RUS’ cost of acquiring funds from the Treasury. The interest rates on municipal rate loans are based on the rates in effect for municipal obligations of similar maturities; the rates on these loans are also less than RUS’ cost of borrowing. In addition, RUS has had few repayment problems with its direct loans. Finally, RUS estimated the subsidy costs on the cost-of-money rate loans made during this 3-year period at a far lower amount—$0.1 million. These loans do not have an interest rate subsidy because they are made at rates that match RUS’ cost of borrowing. Currently, some financially healthy borrowers are obtaining municipal rate loan funds at interest rates that are less than the 5-percent rate available on hardship loans. More specifically, after RUS approves a loan application, a borrower obtains loan funds by taking advances (drawdowns) against the loan. All advances on hardship rate loans bear interest at 5 percent. However, each advance on municipal rate loans bears interest at a rate based on an index of municipal bond rates, which can change each calendar quarter. 
At the beginning of each quarter, RUS publishes a schedule of the interest rates applicable to advances taken during the quarter. A borrower may take up to eight separate advances of funds on an approved municipal rate loan. For each advance, the borrower selects an interest rate term, which is the period of time used to determine the interest rate. The minimum interest rate term is 1 year, and the maximum is the number of years corresponding to the final maturity date of the loan. By selecting shorter interest rate terms, borrowers can obtain interest rates on advances for municipal rate loans that are less than 5 percent. As a result, a borrower with a municipal rate loan can borrow at a lower cost than can a borrower with a hardship rate loan. Specifically, interest rates of less than 5 percent were available on advances for municipal rate loans in 14 of the 15 quarters between January 1, 1994, and September 30, 1997. The lowest rate available in the 15th quarter was 5 percent. As table 4 shows, the interest rates in effect for July 1, 1997, through September 30, 1997, included a range of 3.875 percent for a 1-year interest rate term to 4.875 percent for a 9-year interest rate term. At the end of the interest rate term selected by the borrower for each advance, the borrower has the option of repaying the remaining portion of the advance or rolling it over for a new interest rate term. If the borrower rolls over the remaining amount, depending on the interest rates in effect at that time, the borrower may again obtain an interest rate of less than 5 percent by selecting another short term. However, the borrower runs the risk that interest rates may have increased from the rate initially selected. Many borrowers that took advances on municipal rate loans obtained interest rates of less than 5 percent. 
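The advance mechanics described above reduce to a small calculation. In the sketch below, the 1-year and 9-year rates come from the report's table 4 for the July-September 1997 quarter; the intermediate rates, the helper name, and the example advance are illustrative assumptions:

```python
HARDSHIP_RATE = 5.0  # percent; fixed rate on all hardship rate loan advances

# One quarter's published schedule: interest rate term (years) -> rate (%).
# The 1-year and 9-year rates are from the report's table 4; the middle
# entries are interpolated here purely for illustration.
schedule = {1: 3.875, 3: 4.250, 5: 4.625, 7: 4.750, 9: 4.875}

def terms_below_hardship(schedule, hardship=HARDSHIP_RATE):
    """Interest rate terms whose municipal rate beats the hardship rate."""
    return sorted(term for term, rate in schedule.items() if rate < hardship)

print(terms_below_hardship(schedule))  # [1, 3, 5, 7, 9] -- every term beats 5%

# First-year interest saved on a $12.4 million advance taken at this
# quarter's 5-year rate instead of the 5-percent hardship rate:
advance = 12_400_000
saving = advance * (HARDSHIP_RATE - schedule[5]) / 100
print(f"${saving:,.0f}")  # $46,500
```

At the end of the selected term the borrower faces the rollover decision described above: a new short term may again price below 5 percent, or rates may have risen in the interim.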
Specifically, 115 borrowers took a total of 210 advances with interest rates of less than 5 percent on municipal rate loans approved during fiscal years 1994 through June 30, 1997. The total amount of these advances was $242 million. For example, a borrower with about 29,500 customers had $25.1 million in equity at the end of 1994. In February 1995, RUS approved a $24.7 million loan, and in August 1995, the borrower took a $12.4 million advance. The borrower selected a 5-year interest rate term and obtained a 4.625-percent interest rate. Another borrower with about 15,400 customers had $19.6 million in equity at the end of 1995. In April 1996, RUS approved an $11 million loan, and in February 1997, the borrower took a $9.4 million advance. This borrower selected a 1-year interest rate term and obtained a 3.875-percent interest rate. While RUS’ water and waste disposal loan program has graduation requirements, the RE Act does not require RUS to attempt to move financially healthy direct loan borrowers in the electricity and telecommunications programs to commercial credit sources. RUS officials told us that they have not instituted a graduation procedure because the RE Act is silent on this issue. Because graduation is not an integral part of RUS’ operation of these two programs, some borrowers may have direct loans longer than needed and are therefore able to take advantage of the favorable terms that exist with such loans. As a result, RUS continues to incur interest and other administrative expenses in servicing the accounts of its financially healthy borrowers. Many electricity and telecommunications borrowers with outstanding direct loans as of December 31, 1996, had favorable financial characteristics indicating that they may be viable candidates for having the commercial sector refinance their RUS debt. 
Specifically, about 39 percent of the borrowers had equity of $10 million or more at the end of calendar year 1996, and another 57 percent had equity of between $1 million and $10 million. In addition, 36 percent of the borrowers made a profit of $1 million or more in 1996, and another 57 percent made a profit of between $100,000 and $1 million. For example, in the electricity program, one distribution borrower with about $146,000 in outstanding direct loan debt had $27.6 million in equity at the end of 1996 and had made $1.7 million in profit in 1996. This borrower also had a current ratio of 2.3, debt-to-asset ratio of 7 percent, and TIER of 534.6. (These three ratios were previously discussed for the electricity borrowers that received loans during calendar years 1994 through June 30, 1997.) In the telecommunications program, a borrower with about $1.8 million in outstanding direct loans had over $23.4 million in equity, $4.2 million in profit, and a current ratio of 11.7, debt-to-asset ratio of 11 percent, and TIER of 31.2. Although RUS has no systematic graduation program, borrowers with direct electricity loans may initiate graduation on their own. That is, the RE Act allows a borrower to prepay its outstanding direct electricity loan at a discount—the discounted prepayment amount is the present value of a borrower’s outstanding debt. Therefore, borrowers can graduate by seeking and obtaining other financing. The act also provides that a borrower that prepays at a discount cannot obtain another direct loan from RUS for 10 years from the prepayment date. If eligible, however, such borrowers could obtain a guaranteed loan. RUS’ records show that during fiscal years 1994 through June 30, 1997, a total of 107 borrowers prepaid their direct electricity loans at a discount. Their total outstanding debt was more than $1.5 billion, the prepayment amount was about $1.3 billion, and the discount was about $239 million. 
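The discounted prepayment amount described above is a present-value calculation on the borrower's remaining debt service. The following is a minimal sketch; the payment stream and discount rate are invented, and RUS' actual discounting terms are governed by the RE Act and its regulations:

```python
def discounted_prepayment(payments, discount_rate):
    """Present value of a borrower's remaining scheduled payments.

    payments: amounts due at the end of years 1, 2, ...
    discount_rate: annual rate as a decimal (e.g., 0.08 for 8 percent).
    """
    return sum(p / (1 + discount_rate) ** t
               for t, p in enumerate(payments, start=1))

# Hypothetical borrower: five remaining annual payments of $100 (millions)
# against $500 of scheduled debt service, discounted at 8 percent.
pv = discounted_prepayment([100] * 5, 0.08)
print(round(pv, 2))        # 399.27 -- the discounted prepayment amount
print(round(500 - pv, 2))  # 100.73 -- the discount the borrower receives
```

The aggregate figures the report cites fit this pattern: outstanding debt of more than $1.5 billion prepaid for about $1.3 billion, a discount of about $239 million.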
Other USDA rural credit programs generally have graduation procedures. For example, RUS’ regulations provide for periodic reviews of financial information submitted by direct loan borrowers in its water and waste disposal loan program to determine if the borrowers are likely graduation candidates. When graduation appears possible, a borrower or RUS may submit financial information to other lenders to see if they would refinance the borrower’s outstanding direct loan. From a financial standpoint, RUS has successfully operated the telecommunications loan program, but the agency has had, and continues to have, significant financial problems with the electricity loan program. Modifying certain aspects of both loan programs could reduce the agency’s vulnerability to losses on new loans. First, loan and indebtedness limits could be imposed. Currently, the loan programs generally lack limits, and, as a result, some borrowers have obtained large-dollar loans and accumulated large levels of debt. Second, the repayment guarantee that RUS places on loans made by other lenders could be reduced so that the lenders participating in RUS’ programs would share in the risk of the loans they make. Currently, RUS guarantees 100 percent of other lenders’ loans. However, because all guaranteed loans in recent years have been made by the FFB, the risk to the federal government as a whole would not be reduced if the FFB continues to be the sole source of loan funds. Finally, policies could be strengthened to ensure that additional loans are not made to borrowers that are delinquent or that have caused RUS prior losses. While RUS did not make or guarantee loans to such borrowers during the period covered by our review, there are no policies to prevent loans to such borrowers from being made in the future. During fiscal years 1994 through June 30, 1997, RUS wrote off the debt of five electricity loan borrowers; these write-offs totaled more than $1.7 billion. 
In February 1994, RUS wrote off about $14 million of debt for a distribution borrower. In addition, RUS wrote off debt for four power supply borrowers: about $52 million in August 1995, $982 million in September 1996, $502 million in October 1996, and $165 million in June 1997. The majority of these loan losses resulted from investments in nuclear power plants that were either constructed at costs substantially higher than initial projections or abandoned during the construction phase. No borrowers’ telecommunications loans were written off during this period. Additionally, a small number of borrowers still in the electricity program are experiencing serious financial difficulties. These difficulties expose RUS to the risk of more write-offs in the future. As of June 30, 1997, RUS had three borrowers that were delinquent (at least 30 days past due) on scheduled loan payments totaling over $1.2 billion: A distribution borrower was past due on payments of $8.5 million, and two power supply borrowers were past due on payments of $55.2 million and about $1.2 billion, respectively. At the end of June 1997, RUS also had 10 other borrowers—all power supply borrowers—that were experiencing financial distress: They were in bankruptcy, were likely to default on repaying the loans, or had formally requested debt relief. These borrowers owed a total principal of about $7.7 billion on their RUS loans: Six owed between $100 million and $500 million each, two owed between $500 million and $1 billion each, and two owed more than $1 billion each. As we reported in April 1997, these borrowers’ problems generally stem from their investments in nuclear-generating plants that were completed late and over budget or in coal-fired generating plants that were built to satisfy anticipated industrial growth that did not occur. On the other hand, no borrowers with outstanding RUS telecommunications loans were delinquent or otherwise financially stressed. 
Furthermore, our April 1997 report stated that RUS’ electricity loan portfolio faces the possibility of additional financial stress because of increasing competition among the providers of electricity. Competition in the wholesale electricity market is increasing as a result of legislation that was enacted in the early 1990s, such as the Energy Policy Act of 1992 (P.L. 102-486, Oct. 24, 1992). The act encouraged additional wholesale suppliers to enter the electricity market and provided greater access to other utilities’ transmission lines. Additionally, the industry in which RUS’ telecommunications loan borrowers operate is changing. In particular, there have been rapid advances in technology and changes in the legislative environment, such as the Telecommunications Act of 1996 (P.L. 104-104, Feb. 8, 1996). These factors could work to either the betterment or the detriment of the borrowers that have telecommunications loans. The RE Act does not limit the amount of an electricity or a telecommunications loan that a borrower may receive or the amount of outstanding indebtedness that a borrower may accumulate through multiple loans. RUS’ vulnerability to losses on future loans in the operation of these two credit programs could be reduced if limits were imposed. RUS has set loan limits only for direct telecommunications loans. Specifically, the maximum amount of a hardship rate telecommunications loan to any one borrower is the lesser of (1) up to 10 percent of the annual loan appropriation or (2) $7 million, an operational level set administratively by the agency in fiscal year 1996. RUS set this maximum amount in order to distribute its limited funds among the largest number of qualified borrowers. Similarly, to ensure that its cost-of-money rate and RTB loan funds are broadly dispersed, on September 5, 1997, RUS published a change to its regulations, providing a limit of 10 percent of the annual loan appropriation to any single borrower. 
This change became effective on October 6, 1997. RUS officials in both programs told us that they had not set limits for the other loan types—all electricity loans and guaranteed telecommunications loans—or limits on the amount of debt that a borrower can accumulate because the RE Act does not require limits. Electricity program officials added that they believe they need to be able to provide an applicant with the level of funds needed to support the proposed project. The general lack of loan limits has allowed RUS to make large-dollar loans to some borrowers. Specifically, while most electricity loans approved during fiscal years 1994 through June 30, 1997, were for less than $10 million, a total of 77 loans, or about 14 percent of the loans, were for $10 million or more. These 77 loans totaled about $1.6 billion, or 51.7 percent of the amount for all loans approved during the period. Similarly, while most telecommunications loans were made for less than $10 million, a total of 36 loans, or about 9.2 percent, were for $10 million or more. These 36 loans totaled about $653 million, or 36.5 percent of the amount for all loans approved during the period. (App. II provides detailed information on loans to borrowers by loan size.) In addition to the general absence of limits on individual loans, borrowers do not have any limits on the total amount of debt that they can accumulate through multiple loans. As a result, some borrowers owe a high dollar amount of outstanding principal. For example, 128 borrowers with outstanding direct loans in the electricity program as of June 30, 1997, each owed more than $20 million; one owed about $122 million. In the telecommunications program, 38 borrowers with outstanding direct loans each owed more than $20 million; one owed over $100 million. Loan and debt limits exist in some, but not all, of USDA’s rural credit programs. 
For example, USDA has loan and debt limits on its farm ownership, operating, and emergency disaster loans and on its single-family housing loans. Conversely, it has no limits on loans in other programs, such as those made in the water and waste disposal loan program. The RE Act allows RUS to guarantee repayment on electricity and telecommunications loans made by the FFB or other lenders. The act also requires that the guarantee be 100 percent. As of June 30, 1997, RUS had about $19.8 billion in guaranteed loan debt on which it has full risk exposure. Almost $14 billion of this amount was outstanding principal on original loans with RUS guarantees, and about $5.8 billion was on restructured loans. The lenders that made the loans—the FFB and a few commercial lenders—have no risk exposure. Providing a guarantee of less than 100 percent could reduce RUS’ vulnerability to losses from these two credit programs. However, because all guaranteed loans in recent years have been made by the FFB, the risk to the federal government as a whole would not be reduced if the FFB continues to be the sole source of loan funds. According to FFB officials, providing a guarantee of less than 100 percent could cause the FFB to stop making electricity and telecommunications loans because it only participates in lending programs when there is full security on its loans. Even if the guarantee remains unchanged, however, a provision in the recently enacted Balanced Budget Act of 1997 (P.L. 105-33, Aug. 5, 1997) may affect the FFB’s willingness to continue making loans to electricity and telecommunications borrowers. The act provides that the surcharge on FFB loans, which is one-eighth of 1 percent over the Treasury’s cost of borrowing, is to be deposited in the RUS account held by the Treasury and used to finance the cost of these two loan programs. FFB officials told us that this surcharge has generally offset its administrative cost of participating in RUS’ programs. 
The act also provides that the FFB can require RUS to reimburse it for the administrative expenses incurred that are attributable to the loans. All loans that received RUS’ guarantees during fiscal years 1994 through June 30, 1997, were made by the FFB. While the RE Act gives the borrower the option of selecting the FFB or a commercial lender, RUS officials told us that borrowers have selected the FFB because it offers lower interest rates. According to FFB officials, some borrowers also turn to the FFB because the large amount of money they need is probably more than commercial lenders would provide. While this may be true, our analysis of guaranteed loans made by the FFB and commercial lenders showed that some commercial lenders provided large-dollar loans and that the FFB made a number of small-dollar loans that could have been funded by commercial lenders. For example, as of June 30, 1997, 10 power supply borrowers had outstanding guaranteed loans from commercial lenders that had been made before the start of fiscal year 1994—six of these had received loans for more than $100 million each. In addition, even though the FFB is thought of as a high-dollar lender, RUS’ records showed that 9 of the 36 electricity loans and 20 of the 29 telecommunications loans made by the FFB during fiscal years 1994 through June 30, 1997, were for less than $5 million. USDA has less risk exposure when guaranteeing loans in other rural credit programs, such as farm ownership and operations, single-family housing, community facilities, business and industry, and water and waste disposal loans. With each of these loan programs, the maximum allowable loan guarantee is generally 90 percent. In some cases, such as RUS’ water and waste disposal loans, the guarantees are usually at 80 percent. A borrower that is delinquent on an electricity or a telecommunications loan is not prohibited by the RE Act or by RUS’ regulations from obtaining an additional loan. 
Likewise, a borrower that has caused RUS to incur loan losses is not prohibited from obtaining another loan. Our review of RUS’ loan approval records showed that no borrower that was delinquent or that had caused prior losses received loans during fiscal years 1994 through June 30, 1997. While RUS did not make or guarantee loans to such borrowers during this period, we believe that the agency’s ability to do so is an area of concern that could, if loans were made, contribute to future exposure to loss. Prohibiting loans to such risky borrowers is a way of ensuring that RUS does not add to its vulnerability in operating these two credit programs. In recent years, RUS has had few borrowers that were delinquent or that caused it to incur losses. Specifically, as of June 30, 1997, three electricity loan borrowers were delinquent; no telecommunications loan borrowers were delinquent. Additionally, during fiscal years 1994 through June 30, 1997, RUS wrote off the debt of five electricity loan borrowers, which resulted in losses to RUS; no telecommunications borrowers had loans written off. RUS’ electricity and telecommunications loan officials told us that a borrower has to be in good standing on its existing debts in order to obtain a RUS loan. An official in the telecommunications program said that it would be highly unlikely for an additional loan to be made to a borrower that had caused a loss because RUS would have pursued foreclosure proceedings against the borrower and would have required disposal of assets as a part of the settlement that resulted in the loss. Nonetheless, officials in both programs acknowledged that their regulations do not prohibit loans to delinquent borrowers or to those that have caused prior losses. 
On September 26, 1997, RUS published a change to its electricity loan regulations that, rather than denying loans to borrowers that have had debts written off, provides guidance on what such borrowers need to provide as a condition for obtaining another loan. RUS stated that in considering a loan request from a borrower whose debt had been settled, including debt written off, the borrower would be required to demonstrate evidence of financial support for the amount of the requested loan. This support could include increasing the level of the applicant’s equity or a guarantee of debt repayment, either from the applicant’s members (in the case of a power supply borrower) or from a third party. Prior to the Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127, Apr. 4, 1996), USDA provided some loans in another rural credit program—farm loans—to delinquent borrowers and to those whose prior performance resulted in losses for USDA. However, because of concerns about the fiscal prudence of making loans to such borrowers, coupled with the high level of delinquencies and losses that USDA had experienced, the Congress enacted provisions in that act that generally prohibit farm loans to such borrowers. RUS is not the only provider of credit to rural utilities. Two commercial lenders have a significant level of lending activity for rural electricity and telecommunications purposes: (1) the National Rural Utilities Cooperative Finance Corporation (CFC) and its various affiliated lending organizations and (2) the Farm Credit System (FCS). These two commercial lenders had a combined total of $13.1 billion in outstanding principal on loans for rural electricity and telecommunications purposes as of June 30, 1997. CFC provides electricity loans to its owners, such as distribution cooperatives and power suppliers. CFC’s loans parallel RUS’ lending—that is, loans are made for financing the construction, improvement, and repair of electricity systems. 
Loans are also made for other purposes, such as financing operations and business activities related to the borrowers’ electricity operations, including acquiring office buildings and equipment. One of CFC’s affiliates—the Guaranty Funding Cooperative—made electricity loans to CFC’s owners for refinancing their outstanding FFB debt. Another affiliate—the Rural Telephone Finance Cooperative (RTFC)—makes loans to rural telephone systems that are eligible to participate in RUS’ telecommunications program. While RTFC finances some infrastructure development, most of its financing is for activities that RUS is not involved in, such as cellular telephone operations, or is involved in to only a limited extent, such as the acquisition of local telephone exchanges. FCS lends primarily to agricultural producers and agricultural cooperatives. However, two FCS banks—CoBank and the St. Paul (Minnesota) Bank for Cooperatives—also make loans to rural utilities. CoBank is FCS’ national bank for lending to rural utility systems and cooperatives. The St. Paul Bank, although it also has a national charter, provides similar lending to borrowers located primarily in four upper midwestern states (Michigan, Minnesota, North Dakota, and Wisconsin). Both banks provide electricity and telecommunications loans to RUS’ borrowers, rural utility systems that are eligible to borrow from RUS, and the subsidiary organizations of these borrowers or other eligible entities. As with CFC and RTFC, the loans from these banks parallel RUS’ infrastructure lending and are also made for other activities that RUS is not involved in or is involved in to only a limited extent. These lenders and their affiliated organizations had about $10.4 billion in outstanding principal on electricity loans and about $2.8 billion in outstanding principal on telecommunications loans as of June 30, 1997. As table 5 shows, CFC’s loans accounted for the greatest portion of this amount. 
Additionally, the information provided to us by each of these lenders shows that their electricity and telecommunications portfolios were generally financially sound. For example, less than 1 percent of CFC’s $7.8 billion electricity loan portfolio was owed by delinquent borrowers. Furthermore, since its inception in 1969 through the end of May 1997, CFC wrote off a total of $28.4 million in electricity loans. According to CFC and RTFC officials, no borrowers with RTFC telecommunications loans were delinquent, and no such loans had been written off since RTFC’s inception in 1987. CoBank and the St. Paul Bank had similar experiences. Specifically, all borrowers with electricity and telecommunications loans were current on their loan repayment. In addition, according to their officials, CoBank has not written off any electricity or telecommunications loans in recent years, and the St. Paul Bank has never written off such a loan. RUS has had a long and successful role in contributing to the development of the utility infrastructure in the nation’s rural areas. However, RUS is now at a significant crossroads. The size of the population in the areas served by many of RUS’ borrowers has changed over time, as have the financial resources available to borrowers. Furthermore, spurred by recent legislative and/or technological changes, increasing competition in the electricity and telecommunications industries may have an impact on many of the agency’s borrowers. We recognize that difficult decisions are necessary to improve the effectiveness and reduce the cost of these loan programs as well as to decrease RUS’ vulnerability to losses in operating the programs. It may be hard to accomplish all these objectives simultaneously. 
Recognizing that there would be trade-offs with any changes to RUS’ electricity and telecommunications loan programs, the Congress has a number of options that it could consider in its deliberations on the future of RUS’ programs, including the following: To ensure that RUS’ assistance is targeted to rural areas with sparse populations, the Congress could apply a population threshold test to the service areas of borrowers who apply for any RUS loan—not only for initial loans but also for any subsequent loans. To target subsidized direct loans to borrowers in need of RUS’ assistance and to control program costs, the Congress could make financial tests a part of the eligibility criteria for the various types of direct loans in both programs. Additionally, cost-of-money rate loans could be established in the electricity program for borrowers that do not meet the financial tests for municipal rate loans. Furthermore, the interest rates for municipal rate loans and cost-of-money rate loans, if established in the electricity program, could be set no lower than the rate on a hardship rate loan. Finally, a test could be established to require a borrower to seek commercial credit as a condition for RUS’ assistance. To assist in moving financially healthy borrowers with direct loans to the commercial sector, the Congress could have RUS establish a graduation program to require borrowers to attempt to have their outstanding direct loans refinanced by commercial credit sources. To limit the level of the agency’s vulnerability to losses, the Congress could set limits on the total amount of money that RUS provides or guarantees on any one loan and on the total amount of outstanding debt that any one borrower can accumulate through a combination of loans. To further control RUS’ vulnerability to losses on guaranteed loans, the Congress could set the repayment provision at less than 100 percent. 
To ensure that RUS does not increase its vulnerability to losses by making loans to certain risky borrowers, the Congress could provide guidance specifying that a borrower is ineligible for a direct or a guaranteed loan if the borrower is delinquent or has caused RUS to incur a prior loan loss.

We provided a draft of this report to USDA for its review and comment. In summary, USDA expressed concern over several of the options presented in the report, particularly those involving targeting loans, graduating borrowers, and limiting borrowers' loan and debt levels. In regard to targeting loans, USDA noted, among other things, that a borrower serving a combination of rural and nonrural customers is probably financially stronger than a borrower that does not serve a diverse customer base. We agree. Our point, however, is that some borrowers serve large numbers of customers, including some in nonrural areas, and that the Congress may want to target loans to borrowers who serve rural areas more exclusively. USDA's discomfort over the options involving graduating borrowers and limiting borrowers' loan and debt levels reflects, in part, concern over possible detrimental impacts that these options may have on borrowers or their service to rural areas. It is difficult to predict the extent to which USDA's concerns would be realized if these options were put into effect. However, we believe that the possible impacts on service in rural areas should be considered in developing specific implementation plans for these or any other options that the Congress may choose to act upon. Overall, USDA's comments provide additional perspectives on issues discussed in the report and highlight the difficulties that face policymakers as they consider options for improving the effectiveness and efficiency of RUS' electricity and telecommunications loan programs while reducing their cost to the government.
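The financial tests described in the options above could take many forms. As a minimal sketch, the following uses the measures defined elsewhere in this report (equity as total assets less total liabilities; the current, debt-to-asset, and times-interest-earned ratios). The threshold values and the two-tier outcome are purely illustrative assumptions, not proposed RUS criteria.

```python
# Illustrative sketch only: the ratio definitions follow the report, but the
# thresholds and the tiering scheme are hypothetical, not RUS policy.
from dataclasses import dataclass


@dataclass
class Borrower:
    total_assets: float
    total_liabilities: float
    current_assets: float
    current_liabilities: float
    net_income: float          # profit (net margin) for the prior year
    interest_expense: float


def equity(b: Borrower) -> float:
    # Equity as defined in the report: total assets less total liabilities.
    return b.total_assets - b.total_liabilities


def current_ratio(b: Borrower) -> float:
    # Extent to which current assets cover current liabilities.
    return b.current_assets / b.current_liabilities


def debt_to_asset_ratio(b: Borrower) -> float:
    return b.total_liabilities / b.total_assets


def times_interest_earned(b: Borrower) -> float:
    # Earnings before interest, relative to interest expense.
    return (b.net_income + b.interest_expense) / b.interest_expense


def loan_tier(b: Borrower) -> str:
    """Hypothetical financial test: route financially strong applicants to
    unsubsidized (cost-of-money) credit, others to subsidized loans."""
    strong = (
        equity(b) >= 10_000_000            # hypothetical threshold
        and b.net_income >= 1_000_000      # hypothetical threshold
        and debt_to_asset_ratio(b) <= 0.40
    )
    return "cost-of-money rate loan" if strong else "municipal rate loan"
```

A screening of this kind would also have to accommodate the population threshold and commercial-credit tests discussed above, which depend on data beyond a borrower's financial statements.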
A complete presentation of USDA's comments and our response is provided in appendix III. We performed our review of the operations of RUS' electricity and telecommunications loan programs from May 1997 through December 1997 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix IV.

As agreed, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this letter. At that time, we will send copies of this report to the appropriate Senate and House committees; interested Members of Congress; the Secretary of Agriculture; the Administrator of RUS; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix V.

This appendix contains information on the financial characteristics of borrowers that obtained electricity and telecommunications loans during calendar years 1994 through June 30, 1997. Table I.1 shows that the overwhelming majority of the borrowers had equity of $1 million or more at the end of the year prior to receiving the loans. Table I.2 shows that most of these borrowers made a profit of at least $100,000 in the year prior to receiving the loans. Tables I.3, I.4, and I.5 show that the current ratios, debt-to-asset ratios, and times-interest-earned ratios of the borrowers were generally favorable prior to receiving the loans.

This appendix contains information on the dollar value of electricity and telecommunications loans made to borrowers during fiscal years 1994 through June 30, 1997. Table II.1 shows that while most of the 926 loans approved during this period were made for less than $10 million, 113 loans were for $10 million or more.

The following are GAO's comments on the U.S.
Department of Agriculture’s (USDA) letter dated December 18, 1997. 1. The draft reviewed by USDA contained no GAO recommendations; rather, as requested by the Senate Committee on Agriculture, Nutrition, and Forestry, it presented several options for congressional consideration and recognized that there would be tradeoffs for any option implemented. 2. Our report referenced the 7 U.S.C. 930 provision in a relatively narrowly focused discussion of how an applicant’s financial health affects its eligibility to obtain RUS’ loans. As a result, we had no reason to discuss the other parts of the provision that dealt with broader policy statements on the availability of RUS’ loan funds. We therefore continue to believe that we cite the provision appropriately and that it indicates congressional intent that borrowers in both programs should be encouraged and assisted to use their own resources or seek credit through commercial sources to satisfy their needs. 3. We believe that changes in the composition of a borrower’s service territory should be considered in determining an applicant’s eligibility to participate in RUS’ loan programs if the Congress is interested in targeting loans primarily to rural areas. We agree with the benefits of diversity cited by USDA—that a combination of rural and nonrural customers reduces risk and contributes to financial health. Our point is that the Congress may want to consider clarifying the level at which RUS’ loans are primarily benefiting nonrural rather than rural customers. 4. We appreciate USDA’s concerns about the changing environment in which RUS’ borrowers operate. We recognize in the report’s discussion on the continuing vulnerability to loan losses that competition may affect borrowers. 5. RUS uses net margins to refer to the bottom-line income of its cooperative borrowers; we recognize RUS’ use of this term in footnote 11 in the report. 
Rather than use this term, however, we use profits (net income), which is more widely recognized. Profits, or net margins, and losses, or deficits in net margins, are calculated in the same manner: operating revenue less operating expenses, plus or minus nonoperating income/expenses, other fixed charges (including interest expense), and other income statement adjustments. We also recognize that a cooperative's distribution of profits/margins to its members has the effect of reducing the rates that the members pay.

6. Our intention in providing information on customer populations was to show that some borrowers serve large populations—a fact that USDA acknowledges in its response. While most RUS borrowers may be serving sparsely populated areas, as USDA points out, our purpose was to report on customer populations and identify instances in which borrowers appear to be serving areas that are not sparsely populated. Regarding the example of a telecommunications loan borrower, documentation in RUS' files stated that the loan was intended to benefit the borrower's entire service area—not just its rural customers.

7. Our draft report did not suggest that customer size be a criterion for program eligibility. In fact, the report acknowledges that customer service statistics are only an indicator of population density, which, in our view, should be considered if the Congress wants to target program benefits to rural areas.

8. The draft reviewed by USDA did not discuss the extent to which borrowers invested their own funds or sought nonfederal financing. Rather, it discussed the levels of equity, profit, and various ratios for borrowers that obtained loans during calendar years 1994 through June 30, 1997.

9. The draft reviewed by the Department defines equity as total assets less total liabilities—it did not state or attempt to imply that equity is only cash.

10. We recognize that there is some judgment involved in determining benchmarks for financial ratios.
This is why we presented data on the number of RUS' borrowers having debt-to-asset ratios of 70 percent or less as well as those having debt-to-asset ratios of no more than 40 percent.

11. We agree. As the draft reviewed by USDA stated, the current ratio is a measure that shows the extent to which a borrower has sufficient current assets to cover its current liabilities. As such, it is one measure of the financial health of borrowers.

12. The draft reviewed by USDA stated that the discounted prepayment amount is the present value of a borrower's outstanding debt.

13. The borrower we use as an example in the report is one of many borrowers that appear to be candidates for commercial lenders to refinance their outstanding direct loans. As the report states, about 39 percent of RUS' electricity and telecommunications borrowers had equity of $10 million or more at the end of 1996. In addition, about 36 percent made a profit of $1 million or more in 1996.

14. We appreciate USDA's concerns about requiring borrowers to refinance their direct loans with private sector financing at a time when the environment in which the borrowers operate is changing. However, some borrowers appear to have such highly favorable financial characteristics that we believe a graduation program is a logical step in assisting them to move to private sector financing.

15. The draft reviewed by USDA recognized that the Telecommunications Act of 1996 and the Energy Policy Act of 1992 could have either positive or negative impacts on RUS' borrowers and on the quality of the agency's portfolio. This issue is covered in the discussion on the continuing vulnerability to loan losses.

16. We agree that the telecommunications loan program has been operated very successfully.
The draft reviewed by USDA stated that there were no telecommunications loans written off during the period covered by our review and that no telecommunications loans were delinquent as of June 30, 1997. We have revised the report to reflect USDA's comment concerning the losses in the electricity loan program.

17. USDA states that it does not agree that loan limits will reduce RUS' vulnerability to loan losses. We believe that limits would reduce the agency's vulnerability because individual borrowers would be restricted to a maximum amount on any one loan and on the level of debt that they could accumulate through multiple loans.

18. The extent to which these problems occur would, of course, depend on how much of a limit was placed on loans and debt. These limits could be established with the intent of balancing minimizing risk with optimizing operational efficiency.

19. We do not agree with USDA that the September 1997 rule adequately addresses our concerns. The rule allows borrowers whose accounts are settled, including a write-off of debt, to obtain additional loans, rather than prohibiting such borrowers from being eligible for loans.

In April 1997, we reported on the financial condition of RUS' multibillion-dollar portfolio of electricity and telecommunications loans. Subsequently, the Chairman and the Ranking Minority Member of the Senate Committee on Agriculture, Nutrition, and Forestry requested that we conduct a follow-up study focusing on RUS' program operations, specifically looking to identify ways to (1) make the electricity and telecommunications loan programs more effective and less costly for the government and (2) decrease RUS' vulnerability to loan losses. They also requested that we compile loan information on commercial lenders that have a significant level of lending for rural electricity and telecommunications purposes.
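The profit (net margin) calculation described in GAO's comment 5 can be expressed as a short sketch. The sign conventions assumed for the nonoperating and adjustment items are a reading of the formula, not RUS' official accounting definitions.

```python
# A minimal sketch of the profit (net margin) calculation: operating revenue
# less operating expenses, plus or minus nonoperating items, other fixed
# charges (including interest expense), and other income statement
# adjustments. Sign conventions for the optional items are assumptions.
def net_margin(operating_revenue: float,
               operating_expenses: float,
               nonoperating_income: float = 0.0,   # net of nonoperating expenses
               other_fixed_charges: float = 0.0,   # includes interest expense
               other_adjustments: float = 0.0) -> float:
    return (operating_revenue
            - operating_expenses
            + nonoperating_income
            - other_fixed_charges
            + other_adjustments)
```

A negative result under this formula corresponds to a loss, or deficit in net margins, which the report notes is calculated in the same manner.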
To compile information on loans and outstanding debt, we used RUS’ automated loan records and various loan reports. We did not adjust the outstanding loan amounts to reflect the allowance for losses that RUS includes in its financial statements or assess the adequacy of reserves on the loans. To address our first two objectives—ways to make the loan programs more effective and less costly for the government and to decrease RUS’ vulnerability to loan losses—we interviewed officials at RUS’ headquarters, including the Assistant Administrators and Deputy Assistant Administrators for Electricity and Telecommunications. We reviewed in detail the Rural Electrification Act of 1936, as amended, and its legislative history; and RUS’ implementing regulations and other program operating guidance. We conducted extensive analyses of information in RUS’ various automated records. First, we identified borrowers from the automated records that received loans in calendar years 1994 through June 30, 1997, and then matched those borrowers with the agency’s databases containing borrower-submitted operational and financial information for the year prior to the one in which the loans were made. In addition, we categorized the borrowers that received loans by various incremental ranges of loan amounts. Second, we analyzed borrowers’ financial data at the end of 1996 to determine the financial characteristics of borrowers with outstanding direct loans. Third, we analyzed information covering borrowers that prepaid their direct electricity loans at a discount during fiscal years 1994 through June 30, 1997. We also interviewed RUS’ officials in Oklahoma and Missouri, and an electricity borrower and a telecommunications borrower in each of those two states. The information on the subsidy costs of the programs for fiscal years 1994 through 1996 was obtained from USDA reports. 
The information on interest rates that were available on municipal rate loan advances from January 1, 1994, through September 30, 1997, was obtained from RUS’ quarterly publications in the Federal Register and/or from other RUS announcements. We also extracted from RUS’ loan portfolio databases the information on borrowers that obtained advances with interest rates of less than 5 percent. We interviewed Federal Financing Bank (FFB) officials to obtain information on the bank’s participation in RUS’ loan programs. We reviewed the FFB’s annual financial statements and independent auditor’s reports for fiscal years 1994 through 1996. We also reviewed the provisions in the Balanced Budget Act of 1997 that relate to the FFB’s participation in RUS’ programs. We obtained the information on problem borrowers, including borrowers that caused losses, from interviews of RUS officials, including those in the electricity program; testimony by RUS’ Administrator at a July 8, 1997, hearing before the Senate Committee on Agriculture, Nutrition, and Forestry; and the agency’s financial reports and automated records. To address our third objective—information on commercial lenders that have a significant level of lending for rural electricity and telecommunications—we interviewed RUS’ loan program officials and FFB officials. We also interviewed officials with each of the private lending institutions that we identified—the National Rural Utilities Cooperative Finance Corporation, Rural Telephone Finance Cooperative, CoBank, and the St. Paul Bank for Cooperatives—and reviewed documents they provided that describe their organizations and lending activities, and, as of June 30, 1997, the extent of their outstanding loans and the quality of their loan portfolios. We did not verify the accuracy of the loan information that they provided to us, but we noted that it was consistent with data in their 1996 annual reports, which had been audited by independent auditors. 
We also reviewed the reporting requirements of federal banking regulators to determine if commercial banks report on their lending activities for rural electricity and telecommunications purposes. However, the regulators do not require banks to report such information. Much of the financial data presented in this report were taken from RUS' reports and automated records, which include data submitted by borrowers. We did not verify the accuracy of the information contained in the agency's reports and automated records. We also did not verify the accuracy of the submissions from the borrowers to RUS. We conducted our review from May 1997 through December 1997 in accordance with generally accepted government auditing standards.

We provided copies of a draft of this report to USDA for review and comment. The Department's comments and our response to them appear in appendix III and are discussed in the body of the report. We also provided extracts from our draft report to the Cooperative Finance Corporation and the Rural Telephone Finance Cooperative, and to CoBank and the St. Paul Bank, which covered their respective lending activity. We made technical corrections to the report on the basis of their comments.

Oliver H. Easterwood

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Rural Utilities Service's electricity and telecommunication loan programs, focusing on: (1) ways to make the loan programs more effective and less costly for the government; (2) ways to decrease the Rural Utilities Service's vulnerability to loan losses; and (3) loan information on commercial lenders that have a significant level of lending for rural electricity and telecommunication purposes. GAO noted that: (1) because loan programs are intended to assist in the development of the nation's rural areas, targeting loans to borrowers that provide services to areas with low populations could result in the more effective use of the agency's limited loan funds; (2) current lending practices sometimes result in loans to borrowers serving areas that are heavily populated; (3) targeting subsidized direct loans to borrowers that need the agency's assistance to fund their utility projects could result in the more effective use of the loan funds and reduce the level of subsidized loans and program costs; (4) the agency sometimes makes its subsidized direct loans to borrowers capable of using their own resources or of obtaining loans from the private sector to fund their utility projects; (5) graduating the agency's financially viable borrowers from direct loans to commercial credit could also reduce program costs; (6) opportunities also exist to decrease the Rural Utilities Service's vulnerability to losses; (7) the agency's vulnerability could be lessened if loan and indebtedness limits were established; (8) borrowers have been able to obtain large-dollar loans and accumulate large amounts of debt because such limits are generally lacking; (9) the repayment guarantee that the agency places on loans made by other lenders could be reduced so that lenders holding the guaranteed loans bear some portion of the financial risk; (10) the agency guarantees the repayment of loans made by other lenders at 100 percent; (11) because 
all guaranteed loans in recent years have been made by the Treasury's Federal Financing Bank, the risk to the federal government as a whole would not be reduced if the Federal Financing Bank continues to be the sole source of loan funds; (12) although the agency did not make or guarantee loans to delinquent borrowers or to borrowers that had caused loan losses during the period covered by GAO's review, there are no policies prohibiting additional loans to such borrowers; (13) the Rural Utilities Service is not the only provider of credit to rural utilities; (14) two commercial lenders are actively involved in lending to rural electricity and telecommunications providers; and (15) these two lenders had approximately $13.1 billion in outstanding principal on loans for rural electricity and telecommunication purposes.
FPS MegaCenters provide federal agencies with three primary security services—alarm monitoring, radio monitoring, and dispatch—through four locations using a variety of IT systems. MegaCenters monitor intrusion, panic, fire/smoke, and other alarms. They also monitor FPS police officers' and contract guards' radio communication to ensure their safety and to provide information, such as criminal background or license plate histories, to officers upon request. In addition, they exercise command and control authority by dispatching FPS police officers or contract guards. MegaCenters also provide a variety of other services. For example, they notify federal agencies regarding national emergencies and facility problems and remotely diagnose problems with federal agency alarms. They also receive and transcribe FPS police officer incident reports. Individual MegaCenters may also provide unique services not provided by other MegaCenters, such as facility-specific access control and remote programming of alarms via the Internet. One MegaCenter also provides an after-hours telephone answering service for the Drug Enforcement Administration and for GSA building maintenance emergencies.

The MegaCenters are located in Battle Creek, Denver, Philadelphia, and Suitland. Each MegaCenter has a sister center with redundant capability as backup in case of a failure at that MegaCenter: Suitland is paired with Battle Creek, and Philadelphia is paired with Denver. A force of 1,014 FPS police officers and 6,842 contract guards is available for the MegaCenters to dispatch in response to alarms and other emergencies. In fiscal year 2006, the MegaCenters were supported by a budget of $23.5 million, which accounts for about 5 percent of FPS's total budget. The MegaCenters are operated by 23 full-time federal employees—some of whom manage the centers—and about 220 private contractors to provide around-the-clock security services for over 8,300 federal facilities.
The MegaCenters rely on a variety of IT systems, communications systems, and other equipment to provide their security services. The IT systems enable MegaCenter staff to, among other activities, monitor alarms and radio communications of FPS police officers and contract guards. For communications systems, MegaCenters have regional and national toll-free numbers for tenants and the public to contact the MegaCenters during emergencies. Other equipment includes dictation machines, which enable FPS police officers to dictate reports about incidents that occur at facilities.

MegaCenters use various means to assess operations, but their performance measures have weaknesses and are not linked to FPS-wide performance measures. MegaCenter managers assess MegaCenter operations through a variety of means, including reviewing data about volume and timeliness of operations, listening to and evaluating a sample of calls between operators and FPS police officers and contract guards, and receiving informal feedback about customer satisfaction.
FPS managers also have developed 11 performance measures for assessing MegaCenter operations:

- distribute emergency notification reports (also known as SPOT reports) within 30 minutes of notification;
- review problem alarm reports daily;
- obtain regular feedback about customer satisfaction from the field;
- continuously review all SPOT reports and other outgoing information to ensure 100 percent accuracy;
- transcribe dictated offense and incident reports into the database management system within 8 hours of receipt of the report;
- submit reviewed contractor billing reports and time sheets within 7 business days after the last day of the month;
- prepare and review contractor reports for the quality assurance plan;
- maintain completely accurate (nonduplicative) case control numbers;
- meet Underwriters Laboratories (UL) guidelines and requirements;
- test failover of alarm, radio, and telephone systems weekly; and
- monitor calls and review recorded call content for adherence to standard procedures at least monthly.

The Government Performance and Results Act of 1993 requires federal agencies to, among other things, measure agency performance in achieving outcome-oriented goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their progress. We have previously reported on some of the most important attributes of successful performance measures.
These attributes indicate that performance measures should (1) be linked to an agency's mission and goals; (2) be clearly stated; (3) have quantifiable targets or other measurable values; (4) be reasonably free of significant bias or manipulation that would distort the accurate assessment of performance; (5) provide a reliable way to assess progress; (6) sufficiently cover the program's core activities; (7) have limited overlap with other measures; (8) have balance, not emphasizing one or two priorities at the expense of others; and (9) address the governmentwide priorities of quality, timeliness, efficiency, cost of service, and outcome. We assessed the 11 FPS MegaCenter performance measures against selected attributes: linkage to mission and goals, clarity, and measurable targets.

Ten of the 11 MegaCenter performance measures were aligned with FPS's mission to protect federal properties and personnel and with the MegaCenter program's mission to provide high-quality and standardized alarm monitoring, radio monitoring, and dispatch. We found no link between timely review of contractor time sheets and billing statements and FPS's mission, however, primarily because this measure relates to administrative activities. In addition, while 6 of the 11 performance measures have measurable targets—a key component for measuring performance—none of the MegaCenter performance measures met the clarity attribute, because FPS could not provide information about how managers calculate the measures. For example, the performance measure that the centers test the failover ability of alarm, radio, and telephone systems weekly has a quantifiable target and is therefore measurable, but it does not meet the clarity attribute because FPS could not describe its methodology for calculating it.
We also assessed whether, collectively, the MegaCenters’ 11 performance measures sufficiently cover their core program activities (i.e., alarm monitoring, radio monitoring, and dispatch) and address governmentwide priorities of quality, timeliness, efficiency, cost of service, and outcome. Most of the MegaCenter performance measures relate to the three core activities. For example, regular feedback on customer service and monthly review of operator calls cover aspects of the dispatch and radio-monitoring functions. Other performance measures, like distributing emergency notification reports in 30 minutes, help fulfill other critical support functions. However, two performance measures—reviewing contractor quality assurance plans and timely review of contractor time sheets and billing statements—relate to administrative activities that are not strictly related to MegaCenter core activities. Additionally, the MegaCenter performance measures do not collectively address all of the governmentwide priorities. The MegaCenter performance measures primarily address the governmentwide priorities of quality and timeliness. For example, the MegaCenter measures pertaining to transcribing reports within 8 hours and reviewing recorded calls to see if the operator followed standard operating procedures address aspects of service timeliness and quality, respectively. None of the measures relate to the governmentwide priorities of efficiency, cost of service, and outcome. Finally, FPS does not link MegaCenter performance measures to FPS-wide performance measures, specifically the patrol and response time measure. FPS established FPS-wide performance measures to assess its efforts to reduce or mitigate building security risks. The performance measures that FPS established were (1) timely deployment of countermeasures, (2) functionality of countermeasures, (3) patrol and response time, and (4) facility security index. 
The one measure that relates to the MegaCenters—patrol and response time—assesses FPS's ability to respond to calls for service and measures the average elapsed time from when a law enforcement request is received (e.g., alarm, telephonic request from a building tenant, FPS police officer-initiated call) to the time an officer arrives at the scene. FPS's goal is to reduce response times by 10 percent in fiscal year 2006. The MegaCenters are responsible for part of the patrol and response activity that is being measured because the MegaCenters receive alarms and emergency calls and dispatch FPS police officers or contract guards to the scene. However, although data pertaining to this activity exist in the MegaCenters' records management system, they do not measure the timeliness of this activity, and FPS has not developed a performance measure that would identify the MegaCenters' contribution toward meeting FPS's measure.

The nine selected security organizations generally do not provide all three of the MegaCenters' primary services. However, the services these organizations offer are provided similarly by the MegaCenters, with the exception of a CAD system, which three organizations use and the MegaCenters do not. The MegaCenters provide three primary services (i.e., alarm monitoring, radio monitoring, and dispatch), and the selected organizations provide all or some of these three main services. For example, the Park Police provide all three services, while the private organizations focus on providing alarm monitoring and offer some services the MegaCenters do not. Like the MegaCenters, all of the private organizations reviewed have centralized operations: the number of their national control centers ranges from two to five. Work allocation (i.e., how incoming alarms and calls are assigned) among centers varies by organization but overall is similar to the MegaCenter structure.
For example, most of the organizations assign calls and alarms to a specific center based on the geographic location of the call or signal. However, the Postal Inspection Service and one private organization are unique because they are able to allocate workload to centers based on demand and operator availability. The organizations use a variety of methods to measure the quality of their services, many similar to methods used by the MegaCenters. For example, like the MegaCenters, most review a sample of operator calls on a regular basis. Two entities have established measurable performance goals for their centers.

While there are similarities in the services offered, number of centers, work allocation, and service quality appraisals between the organizations reviewed and the MegaCenters, three organizations use a CAD system, which the MegaCenters do not. A CAD system is a tool used by the Denver Police Department for dispatching and officer tracking and by the Postal Inspection Service for officer tracking. The Park Police also use a CAD system with limited capabilities at their San Francisco center and plan to purchase and upgrade the system for all three of their centers. Selected organizations and associations described CAD systems as beneficial for dispatching services because they allow for faster operator response, automatic operator access to standard operating procedures and response prioritization, and automatic recording of operator actions, enabling easier performance analysis.

Since 2003, FPS and DHS both have assessed MegaCenter technology and have identified needs for technology upgrades, including the installation of a CAD system for the MegaCenters. Our guide on IT investment decision making—based on best practices in the public and private sectors—stresses that achieving maximum benefits from an IT project requires that decisions be made on a regular basis about the status of the project.
To make these decisions, senior managers need assessments of the project’s impact on mission performance and future prospects for the project. While the MegaCenters have assessed their technology on many occasions and have determined that some refreshment is needed, FPS has not yet allocated the funding for such upgrades. FPS MegaCenters play a key role in protecting federal facilities, those who enter these facilities, and the FPS police officers and contract guards whose calls the MegaCenters respond to and monitor. How well the MegaCenters are fulfilling their role and carrying out their responsibilities is uncertain because they do not generate much of the information that would be useful for assessing their performance. To their credit, the MegaCenters have established performance measures for a number of their activities and operations, and these measures are aligned with the MegaCenters’ mission. However, the measures have weaknesses, both individually and collectively, compared with the selected attributes of successful performance measures that we have identified. Many of the individual measures are neither quantifiable nor clearly stated, and collectively the measures do not address the governmentwide priorities of efficiency, cost of service, and outcome. As a result, FPS cannot compare performance across the MegaCenters or over time, and without such information, FPS is limited in its ability to identify shortfalls and target improvements. Although FPS has established an FPS-wide performance measure for response time—from the alarm to the FPS police officer’s arrival on the scene—that incorporates the MegaCenters’ operations, the MegaCenters have not established a comparable measure for their operations alone. Without such a measure, FPS cannot evaluate the MegaCenters’ contribution—from the alarm to the FPS police officer’s dispatch—to the FPS-wide measure for response time and identify opportunities for improvement. 
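The gap described above, in which FPS measures response time end to end while the MegaCenters do not measure their own segment, can be illustrated with a minimal sketch. It splits timestamped incident records into the alarm-to-dispatch portion (the MegaCenters' contribution) and the full alarm-to-arrival span (the FPS-wide measure). All record fields and values below are hypothetical and do not reflect the MegaCenters' actual records management system.

```python
from datetime import datetime

# Hypothetical incident records; field names and timestamps are illustrative only.
records = [
    {"alarm": "2006-06-01 10:00:00", "dispatch": "2006-06-01 10:02:30", "arrival": "2006-06-01 10:14:00"},
    {"alarm": "2006-06-01 11:00:00", "dispatch": "2006-06-01 11:01:00", "arrival": "2006-06-01 11:09:30"},
]

def _t(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

def average_minutes(records, start_key, end_key):
    """Average elapsed minutes between two timestamps across all records."""
    spans = [(_t(r[end_key]) - _t(r[start_key])).total_seconds() / 60 for r in records]
    return sum(spans) / len(spans)

# FPS-wide measure: alarm receipt to officer arrival on scene.
overall = average_minutes(records, "alarm", "arrival")
# The MegaCenters' segment: alarm receipt to officer dispatch.
to_dispatch = average_minutes(records, "alarm", "dispatch")

print(f"average response time:   {overall:.2f} min")
print(f"MegaCenter contribution: {to_dispatch:.2f} min")
```

A measure of this kind, computed routinely from the records the MegaCenters already keep, would let FPS compare the dispatch segment against the FPS-wide response-time goal.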
We recommend that the Secretary of Homeland Security direct the Director of the Federal Protective Service to take the following three actions: establish MegaCenter performance measures that meet the attributes of successful performance measures we have identified; develop a performance measure for the MegaCenters that is directly linked to the FPS-wide response time measure and covers the scope of the MegaCenters’ operations, from alarm to dispatch; and routinely assess the extent to which the MegaCenters meet established performance measures. We provided a draft of this report to DHS, the Department of the Interior, and the U.S. Postal Service for their review and comment. DHS provided comments in a letter dated September 6, 2006, which are summarized below and reprinted in appendix II. DHS also provided technical comments, which we incorporated into the report where appropriate. The Postal Service informed us that it had no comments on this report. The Department of the Interior did not provide comments on this report. DHS generally agreed with the report’s findings and recommendations. DHS stated that FPS and the U.S. Immigration and Customs Enforcement (ICE) have undertaken a comprehensive review of the MegaCenters to identify, among other things, ways in which performance can be better measured. DHS noted that through this broad approach, FPS personnel will be able to generate and track the kind of information necessary to assess the MegaCenters’ performance. This one-time review may help FPS identify information needed to assess the MegaCenters’ performance and, therefore, develop appropriate performance measures. In order to reliably assess performance over time, FPS should not only establish appropriate performance measures, but also routinely assess performance using these measures. We therefore clarified our recommendation to include the routine use of established performance measures to assess the MegaCenters’ performance. 
With regard to the report's discussion of CAD system capabilities, DHS said that ICE's Chief Information Officer is currently assessing the MegaCenters' technology requirements and recognizes that previous studies have identified the need for technology upgrades. DHS indicated that the current assessment will have a meaningful impact on FPS's technology capabilities. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees, the Secretary of Homeland Security, and DHS's Assistant Secretary for Immigration and Customs Enforcement. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-2834 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Since the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City and the September 11, 2001, attacks on the World Trade Center and the Pentagon, terrorism has threatened the nation's security, including the physical security of federal facilities. The Homeland Security Act of 2002 created the Department of Homeland Security (DHS), a new federal department with the mission of preventing terrorist attacks within the United States, which includes safeguarding federal facilities. DHS, through its Federal Protective Service (FPS), provides law enforcement and security services to federal agencies that occupy almost 9,000 facilities under the jurisdiction of the General Services Administration (GSA) and DHS, protecting millions of federal employees, contractors, and citizens. 
Under agreement, FPS's authority can be extended to provide its law enforcement and security services to any property with a significant federal interest. As part of its approach to facility protection, FPS provides support for its law enforcement and security services through four control centers (known as MegaCenters) located in Battle Creek, Michigan; Denver, Colorado; Philadelphia, Pennsylvania; and Suitland, Maryland. Because of the important role the MegaCenters play in ensuring the safety of federal facilities and their occupants, our objectives were to (1) identify the services the MegaCenters provide and how they provide them; (2) determine how FPS assesses and measures the performance of MegaCenter operations and how FPS links MegaCenter performance measures to FPS-wide performance measures; and (3) examine how the MegaCenters compare to selected security organizations in the services they provide and in the methods they use to provide them. Document review: We reviewed the Memorandum of Agreement between GSA and FPS and other documentation related to MegaCenter services, as well as documentation related to (1) FPS's request for a computer-aided dispatch (CAD) system for the MegaCenters; (2) past FPS assessments of MegaCenter operations; (3) FPS's performance measures; and (4) FPS's budget for the MegaCenters. Interviews: We interviewed FPS officials, including the MegaCenter branch chief and managers and staff from the Program Review Office, Financial Management Division, and other offices; Immigration and Customs Enforcement's (ICE) Budget Enforcement Office; officials from selected public and private organizations; and officials from security industry standard-setting and accreditation associations. The nine selected organizations were U.S. Customs and Border Protection, the U.S. Park Police, the U.S. Postal Inspection Service, the Denver Police Department, and five private security companies. We conducted our review in accordance with generally accepted government auditing standards. 
Remote monitoring of building alarm systems, radio monitoring, and dispatching of FPS police officers and contract guards are the primary services the FPS MegaCenters provide. These and other services are provided around the clock from four locations across the country. Each MegaCenter has a sister center with redundant capabilities that can serve as an emergency backup, and each is operated by full-time federal employees and private contractors. In addition, the MegaCenters have a fiscal year 2006 budget of $23.5 million and use a variety of information technology (IT) systems and other equipment to provide their services. FPS MegaCenter managers assess MegaCenter operations through a variety of means, including reviewing information on the timeliness and volume of operations, listening to and evaluating a sample of calls between operators and FPS police officers and contract guards, and receiving informal feedback about customer satisfaction. FPS managers have also developed performance measures for assessing MegaCenter operations. Although these MegaCenter measures reflect some attributes of successful performance measures, they also have weaknesses: they are not always clearly stated or measurable, and they do not address the governmentwide priorities of efficiency, cost of service, and outcome. In addition, they do not directly measure key operations that would link to the FPS-wide performance measures, which are (1) the timely deployment of countermeasures, (2) the functionality of countermeasures, (3) patrol and response time, and (4) the facility security index. The nine selected organizations offer some of the MegaCenters' primary services, and they deliver and assess the services they offer in a manner generally similar to the MegaCenters'. 
For example, like the MegaCenters, many of these organizations have centralized their control center operations, have backup capability, allocate workload among control centers based on geographic location, and use regular call reviews as well as volume and time measures to assess the quality of the services they provide. A few organizations offer services the MegaCenters do not. One difference between the MegaCenters and the selected organizations is that three of these organizations use a CAD system, which the MegaCenters do not have. The MegaCenters have assessed their technology and have identified the need for a CAD system; however, FPS has not allocated funds for such a purchase. FPS operations are funded solely through security fees and reimbursements collected from federal agencies for FPS security services. These security fees consist of basic and building-specific security charges. The basic security charges cover the security services that FPS provides to all federal tenants in FPS-protected buildings, including patrol, monitoring of building perimeter alarms and dispatching of law enforcement response (MegaCenter operations), criminal investigations, and security surveys. The building-specific security charges are for FPS security measures that are designed for a particular building and are based on the FPS Building Security Assessment and the building's designated security level. Such measures include contract guards, X-ray machines, magnetometers, cameras, and intrusion detection alarms. Tenant agencies may also request additional security services, such as more guards, access control systems, and perimeter barriers. These two charges are billed monthly to the tenant agencies. The basic security charge is the same for all tenants regardless of the type of space occupied and is assessed at a rate per square foot. 
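The fee structure described here lends itself to simple arithmetic: a flat per-square-foot basic charge, plus building-specific costs allocated pro rata among a building's tenants by square feet occupied. The sketch below is illustrative only; the rate and square footages are hypothetical, not actual FPS figures.

```python
# Illustrative sketch of the FPS security-fee arithmetic described in this
# section. BASIC_RATE_PER_SQ_FT and all tenant figures are hypothetical.

BASIC_RATE_PER_SQ_FT = 0.35  # hypothetical basic security charge rate

def basic_charge(sq_ft):
    """Basic security charge: the same per-square-foot rate for every tenant."""
    return sq_ft * BASIC_RATE_PER_SQ_FT

def building_specific_charges(total_cost, tenants):
    """Allocate a building's specific security costs pro rata by square feet occupied."""
    total_sq_ft = sum(tenants.values())
    return {name: total_cost * sq_ft / total_sq_ft for name, sq_ft in tenants.items()}

# Hypothetical multi-tenant building: two agencies sharing 100,000 sq ft,
# with $120,000 in building-specific security costs to recover.
tenants = {"Agency A": 60_000, "Agency B": 40_000}
specific = building_specific_charges(120_000, tenants)

print(basic_charge(60_000))  # Agency A's basic charge
print(specific)              # each agency's pro rata share of building-specific costs
```

In a single-tenant building the pro rata allocation collapses to the whole cost, which matches the billing distinction the report draws between single- and multi-tenant buildings.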
The building-specific security charge reflects FPS's cost recovery for security measures specific to a particular building, and the billing is handled differently for single- and multi-tenant buildings. In single-tenant buildings, the tenant agency is billed for the total cost of the security measures; in multi-tenant buildings, the tenant agencies are billed based on their pro rata share of the square feet occupied within the building. FPS uses a reimbursable program to charge individual agencies for additional security services and equipment that they request above the level determined for their building. FPS bills the tenant agencies for the FPS security fees they have incurred. The agencies pay the fees into an FPS account in the Department of the Treasury, which is administered by FPS. Congress exercises control over the account through the annual appropriations process, which sets an annual limit—called obligation authority—on how much of the account FPS can expend for various activities. FPS uses the security fees to finance its various activities within the limits that Congress sets. The Department of Homeland Security Appropriations Act for fiscal year 2006 authorized $487 million in obligation authority for FPS expenses and operations. Through FPS's security fees, funds are collected and credited to FPS's account as an offsetting collection from tenant agencies. Under the FPS reimbursable program, agencies request additional security services and equipment using a funded Security Work Authorization. Once the services are provided and the costs are expensed, FPS bills the agency for the costs, and the funds are transferred to the FPS account to offset the expenses FPS incurred. The DHS Inspector General reported in 2006 that when FPS was part of GSA, GSA budgeted and paid for FPS's annual administrative support costs, such as financial management, human capital, and IT, using funds beyond those generated by security fees. 
GSA estimated that these fiscal year 2003 support services cost about $28 million. According to the report, beginning in fiscal year 2004, neither DHS's annual budget request nor DHS's appropriations set aside funding for FPS's support services. In fiscal year 2004, as a component of DHS, FPS paid almost $24 million for support services using funds from security fees alone; a year earlier, these services had been funded by GSA using funds not derived from fees. Before GSA established the MegaCenters, FPS used regional and satellite control centers to monitor alarm systems, dispatch FPS police officers and contract guards, and perform criminal background checks. In total, there were 22 regional control centers and 12 satellite control centers located throughout FPS's 11 regions; most regions had more than one control center. In 1991, GSA conducted an internal review of the control centers. The review found that because of significant budgetary and personnel constraints over more than a decade, the control centers no longer performed well enough to ensure safe, effective, and efficient FPS actions to preserve life and property. GSA contracted with Sandia National Laboratories—the lead laboratory for U.S. Department of Energy security systems—to conduct an in-depth study of the control centers' operation and make recommendations. In 1993, Sandia issued its study, entitled GSA Control Center Upgrade Program. The Sandia study identified serious shortfalls and problems that would require a more radical upgrade of the control centers at a much higher cost than originally believed. After validating the study's findings, GSA determined that a multimillion-dollar upgrade of all control centers would be prohibitively expensive. The study noted that the control centers could be consolidated to almost any level to achieve economies of scale. 
However, the study recommended against a single national-level control center because a second center would be needed to continue operations under catastrophe or failover conditions. GSA concluded that the control center problems that the study identified were material weaknesses and reported them to Congress. FPS conducted an operational and technical review of the Sandia study's findings, which provided a critical assessment of the control centers, a high-level concept of operations for the centers, and functional specifications for upgrading the centers. GSA decided to upgrade 11 control centers—one in each region—and address the weaknesses that the study had identified. Within GSA, concerns were raised about the cost of upgrading 11 control centers, how many control centers were really needed, and whether the centers' operations could be outsourced. GSA established a project team to investigate these concerns. The team contacted several public and private sector organizations that operate control centers. The team found that the organizations were consolidating their control centers but were unable to assume the operations of FPS control centers. A decision was made to consolidate additional centers, and the multi-regional control center, or "MegaCenter," concept was developed. GSA endorsed the MegaCenter concept. GSA assembled a core project team and hired contractors to design, plan, and supervise the construction of the centers. Mathew J. Scire, (202) 512-2834 or sciremj@gao.gov. Other key contributors to this report were Gerald P. Barnes, Assistant Director; Deirdre Brown; Bess Eisenstadt; Colin Fallon; Brandon Haller; Richard Hung; Alex Lawrence; Gail Marnik; and Josh Ormand.
The Department of Homeland Security's Federal Protective Service (FPS), through its control centers (MegaCenters), helps provide for the security and protection of federally owned and leased facilities. This report (1) identifies the services MegaCenters provide, (2) determines how FPS assesses MegaCenter performance and whether FPS links MegaCenter performance measures to FPS-wide measures, and (3) examines how MegaCenters and selected organizations compare in the services they provide. To address these issues, GAO reviewed FPS's performance measures and past MegaCenter assessments, assessed the MegaCenters' performance measures, and interviewed officials and collected relevant information at FPS, the four MegaCenters, and nine selected security organizations. FPS MegaCenters provide three primary security services—alarm monitoring, radio monitoring, and dispatching of FPS police officers and contract guards. These and other services are provided around the clock from four locations—Battle Creek, Michigan; Denver, Colorado; Philadelphia, Pennsylvania; and Suitland, Maryland. With a fiscal year 2006 budget of $23.5 million, the MegaCenters monitor alarms at over 8,300 federal facilities, covering almost 381 million square feet, and have available for dispatch over 7,800 FPS police officers and contract guards. FPS MegaCenter managers assess MegaCenter operations through a variety of means, including reviewing data about the volume and timeliness of operations, listening to and evaluating a sample of calls between operators and FPS police officers and contract guards, and receiving informal feedback about customer satisfaction. FPS managers have also developed performance measures for assessing MegaCenter operations. 
However, these measures are of limited use because they are not always clearly stated or measurable and do not address the governmentwide priorities of efficiency, cost of service, and outcome—which are among the attributes that GAO has identified for successful performance measures. In addition, the MegaCenters do not measure a key activity—the time from alarm to officer dispatch—that would link MegaCenter performance to an FPS-wide performance measure. Without this measure, FPS is limited in its ability to evaluate the MegaCenters' contribution to the FPS-wide measure of response time. Nine selected security organizations—including federal and local police and private entities—offer some of the MegaCenters' services as well as provide and assess these services in a manner that is generally similar to the MegaCenters'. Like the MegaCenters, many of the selected organizations have centralized their operations. They also use regular call reviews and volume and time measures to assess the quality of the services they provide. A major difference between the MegaCenters and some selected organizations is the use of a computer-aided dispatch system, which enables these organizations to automate many functions.
The Department of the Interior’s National Park Service is responsible for managing a large and diverse array of park units that include some of the most significant natural and cultural resources in the nation. In recent years, concern has grown that the parks’ responsibilities and popularity might be hampering the parks’ ability to serve visitors and manage resources. The national park system now hosts about 270 million visitors a year—an increase of more than 20 percent since 1985. The National Park Service is the caretaker of many of the nation’s most precious natural and cultural resources. Today, more than 100 years after the first national park was created, the national park system has grown to include 368 units. These units cover over 80 million acres of land and include an increasingly diverse mix of sites. In fact, there are now 20 different categories of park units. These include (1) national parks, such as Yellowstone in Idaho, Montana and Wyoming; Yosemite in California; and Grand Canyon in Arizona; (2) national historical parks, such as Harpers Ferry in Maryland, Virginia and West Virginia; and Valley Forge in Pennsylvania; (3) national battlefields, such as Antietam in Maryland; (4) national historic sites, such as Ford’s Theatre in Washington, D.C.; (5) national monuments, such as Fort Sumter in South Carolina and the Statue of Liberty and Ellis Island in New York; (6) national preserves, such as Yukon-Charley Rivers in Alaska; and (7) national recreation areas, such as Lake Mead in Arizona and Nevada and Golden Gate in California. The Park Service’s mission has dual objectives. On one hand, the Park Service is to provide for the public’s enjoyment of the resources that have been entrusted to its care. This objective involves promoting the use of the parks by providing appropriate visitor services and the infrastructure (e.g., roads and facilities) to support them. 
On the other hand, the Park Service is to protect its natural and cultural resources so that they will be unimpaired for the enjoyment of future generations. Balancing these objectives has long shaped the debate about how best to manage the national park system. The debate has also been shaped by a number of other developments. Despite the fiscal constraints facing all federal agencies, the number of parks continues to expand—31 parks have been added to the system in the last 10 years. In addition, the maintenance backlog at national parks has increased substantially. In 1988, we reported that the dollar amount of the backlog of deferred maintenance stood at about $1.9 billion. This backlog included items that ranged from such routine activities as trimming trees, maintaining trails, and repairing buildings to such major capital improvements as replacing water and sewer systems and reconstructing roads. While agency officials acknowledged that they do not have precise data on the backlog, they estimated that it exceeded $4 billion in 1994. As agreed with the congressional requesters (see p. 2), we focused our review on 12 park units within the national park system. We judgmentally selected four national parks, two historic parks and one historic site, two national monuments, a national battlefield, a recreation area, and a national seashore. These units represent a cross section of units within the national park system. They include both large and small parks, natural and scenic parks, culturally and historically significant parks, and parks from 7 of the 10 Park Service regions in the country. However, because they are not a random sample of all 368 park units, they may not be representative of the system as a whole. Table 1.1 lists the 12 park units that we visited. For each of the 12 parks, we collected available data on the condition and the trend of visitor services and park resources. 
We obtained visitor service data on facilities (e.g., visitor centers, campgrounds, trails, and roads); personal services (e.g., interpretive programs and other face-to-face programs); nonpersonal services (e.g., self-guided tours and exhibits); and visitor protection (e.g., emergency medical aid, search and rescue assistance, and law enforcement). Concerning resources, we collected condition and trend data on natural resources (e.g., native animals and plants, air and water, exotic species, and threatened or endangered species) and cultural resources (e.g., sites, structures, objects or collections, and cultural landscapes). We also interviewed officials at Park Service headquarters and regional offices as well as at each park visited. This report builds on our March 7, 1995, testimony before a joint hearing of the Subcommittee on Parks, Historic Preservation, and Recreation, Senate Committee on Energy and Natural Resources, and the Subcommittee on National Parks, Forests, and Lands, House Committee on Resources. It also draws on the 26 reports and testimonies that we have issued over the last 5 years on a wide range of Park Service activities and related programs. (For a list of related GAO products, see the end of this report.) We conducted our review from April 1994 through July 1995 in accordance with generally accepted government auditing standards. The natural beauty and historical settings of the national parks make visits by most people a pleasurable and often inspiring experience. Surveys by the Park Service and others show that, in general, visitors are very pleased with their experience at national parks. Nonetheless, we found cause for concern about the health of the park system in terms of both visitor services and resource management. 
The scope and quality of visitor services provided by the Park Service are deteriorating, and a lack of sufficient data on the condition of many natural and cultural resources in the parks raises questions about whether the agency is meeting its mission of preserving and protecting the resources under its care. Of the 12 parks included in our review, 11 had recently cut back the level of visitor services. This reduction is particularly significant considering that managers at most of the parks told us that meeting visitors’ needs gets top priority, often at the expense of other park activities. The following are examples of the cuts in service: At Padre Island National Seashore in Texas, no lifeguards were on duty along the beach during the summer of 1994 to help ensure the safety of swimmers for the first time in 20 years, according to a park official. The beach is one of the primary attractions of the park and hosted an average of 1,300 visitors during summer weekend days in 1991-92. At Shenandoah National Park in Virginia, interpretive programs to assist visitors in understanding and appreciating the natural and scenic aspects of the park were cut by over 80 percent from 1987 through 1993. According to park officials, cutbacks included not having interpreters stationed at busy overlooks and trailheads, considerably fewer guided nature walks, and considerably fewer evening campsite talks about the park’s wildlife and cultural resources. One popular campground of 186 campsites (about one-fourth of all campsites in the park) has been closed because of funding limitations since 1993 and, according to park officials, is scheduled to remain closed until at least 1998. In addition, because of limited funding, park staff have been unable to remove numerous trees that pose a hazard to visitors because they hang precariously over hiking trails. 
At Bandelier National Monument in New Mexico, the main museum—one of the most popular stops at the park—was closed for more than a year because of problems with repairing a leaky roof and an improperly installed security system. At the Statue of Liberty and Ellis Island in New York, the extended hours of operation to meet visitor demand during the peak summer season have been reduced by 3.75 hours each day—a reduction of about 30 percent. We were further told that the length of the season for which hours are usually extended was reduced from 3 to 2 months. At Lake Mead National Recreation Area in Arizona and Nevada, park law enforcement personnel are often faced with a backlog of up to 12 calls each in responding to the needs of visitors during the summer months. According to park officials, enforcement personnel respond to such problems as motor vehicle and boating accidents, alcohol and drug incidents, and increasing gang violence. As these examples illustrate, the cutbacks in services not only adversely affect visitors’ convenience and enjoyment, but also the Park Service’s ability to meet basic visitor safety needs. Table 2.1 provides more details on the condition of visitor services at each of the parks included in our review. Park Service policy directs that parks be managed on the basis of knowledge of their natural and cultural resources and their condition. Without sufficient scientific data depicting the condition and trends of park resources, the Park Service cannot adequately perform its mission of preserving and protecting its resources. However, our review indicated that by and large, the condition and trend of many park resources are largely unknown because of the absence of sufficient information—particularly for parks featuring natural resources, such as Glacier in Montana and Yosemite in California. 
The effective management of park resources depends heavily upon scientifically collected data that enable park managers to detect damaging changes to the parks' resources and guide the mitigation of those changes. This approach involves collecting baseline data about key park resources and monitoring their condition over time to detect any changes. A park official told us that without such information, damage to key resources could go undetected until it is obvious, at which point mitigation may be impossible or extremely expensive. While park officials, as well as an official from the Department of the Interior's National Biological Survey, emphasized the need for this kind of information, we found that information is insufficient or lacking for many of the parks' resources. This situation is not new. Over the past 30 years, more than a dozen major reviews by independent experts as well as the Park Service have concluded that resource management must be guided by more scientific knowledge. From the so-called "Leopold" and "Robbins" reports of 1963 to the report on the 75th Anniversary Symposium on the National Park Service (the "Vail Agenda") in 1992 to a National Research Council report of 1992, concerns have been raised about the lack of scientific data on park resources. Similar concerns have been echoed by park advocacy groups, such as the National Parks and Conservation Association, and by two former Park Service Directors. Overall, managers at the culturally oriented parks we visited, such as the Statue of Liberty National Monument and Ellis Island in New York and Hopewell Furnace National Historic Site in Pennsylvania, reported that (1) the condition of cultural resources was declining and (2) the location and status of many cultural resources—primarily archeological—are largely unknown. Ellis Island is an example of a park where the condition of cultural resources is declining. 
It was reopened in 1990 as the country’s only museum devoted exclusively to immigration. While a few of the island’s structures have been restored, 32 of 36 significant historic buildings have seriously deteriorated. According to park officials, about two-thirds of these buildings could be lost within 5 years if they are not stabilized. These structures are currently not available for public access. They include the former hospital, quarantine area, and morgue. In addition, although some new storage space is being built, some of Ellis Island’s large collection of cultural artifacts is stored in deteriorating facilities. As a result, in one building, much of the collection is covered with dirt and debris from crumbling walls and peeling paint, and leaky roofs have caused water damage to many artifacts. An example of a park where the location and status of cultural resources—in this case, archeological—is largely unknown is Hopewell Furnace National Historic Site. This is an 850-acre park that depicts a portion of the nation’s early industrial development. The main features of the site are a charcoal-fueled blast furnace, an ironmaster’s mansion, and auxiliary structures. Although Hopewell Furnace has been a national historic site since 1938, a park official advised us that the Park Service has never performed a complete archeological survey of the park to identify and inventory all of its cultural resources. According to a park official, without comprehensive inventory and monitoring information, it is unknown whether the best management decisions about resources are being made. Also, the park does not have a current general management plan, which is required by the Park Service and serves as a central component of effective resource management. A general management plan provides basic management guidance on how a park unit and its resources will be protected, developed, and used and documents compliance with the Park Service’s management policies and regulations. 
Table 2.2 shows examples of cultural resource conditions at each of the 12 parks that we visited. Even at the parks we visited that showcase natural resources, little is known about natural resource conditions and trends. This situation exists because the Park Service has not systematically collected scientific data to inventory its natural resources or monitored changes in their condition over time. As a result, the agency cannot scientifically determine whether the overall condition of many key natural resources is improving, deteriorating, or remaining constant. For example, at both Yosemite and Glacier National Parks, data about many of the parks’ natural resources have not been collected. As a result, the condition and trend of these resources are largely unknown. At Yosemite, officials told us that virtually nothing was known about the types or numbers of species inhabiting the park, including birds, fish, and such mammals as badgers, river otters, wolverines, and red foxes. These officials acknowledged that the extent of their knowledge was poor because it was not based on scientific study. At Glacier, baseline information on park wildlife was similarly inadequate. Park officials indicated that most monitoring efforts were directed at four species protected under the Endangered Species Act. They did not have data on the condition and trend of many other species. Another example is Padre Island National Seashore in Texas. According to managers at this park, they did not have information on the condition of four of the seven categories of wildlife within the park. Specifically, they lacked detailed data on the condition of such species as reptiles and amphibians—except for endangered sea turtles—and such terrestrial mammals as white-tailed deer, coyotes, and bobcats. 
Furthermore, except for certain species, such as endangered sea turtles that use portions of the park as nesting areas, park managers had little knowledge about whether the condition of wildlife within the park was improving, declining, or remaining constant. Within the last decade, the Park Service has begun efforts to gather better information about the condition of the parks’ natural resources. According to the Deputy Director of the Park Service, it took the environmental movement of the late 1960s and early 1970s and national attention to the resource problems in the parks during the early 1980s for the Park Service to start seriously addressing natural resource concerns. However, according to the Deputy Director, progress has been limited because of insufficient funding and competing needs, and the completion of much of the work is many years away. In the meantime, park managers often make decisions about the parks’ operations without knowing the impact of these decisions on the resources. For example, according to a park manager at Yosemite National Park, after 70 years of stocking nonnative fish in various lakes and waterways for recreational purposes, park officials realized that indiscriminate stocking had compromised the park’s waterways. Nonnative fish introduced into the park now outnumber native rainbow trout by four to one. According to a park official, this stocking policy, which continued until 1990, has also resulted in a decline of at least one federally protected species. Table 2.3 provides examples of the conditions of natural resources at each of the 12 parks included in our review. The National Park Service is mandated to provide for the enjoyment of visitors to some of the nation’s greatest natural and cultural resources and, at the same time, to preserve and protect those treasures. 
However, cutbacks in the scope and quality of services provided to visitors and a lack of sufficient information about the condition of many natural and cultural resources within national parks are affecting the Park Service’s ability to meet its mandate. While a visit to the nation’s parks is still an enjoyable and pleasant experience for most visitors, reduced park operating hours, less frequent or terminated interpretation programs, fewer law enforcement personnel, and less timely attention to visitors’ safety needs are seriously diminishing the quality of this experience. Moreover, the Park Service’s lack of progress in addressing a decades-old problem of collecting scientific data to properly inventory park resources and monitor their condition and trend over time is threatening its ability to preserve and protect the resources entrusted to it. Officials at the National Park Service generally concurred with the information presented in this chapter. They provided some clarifying language that we incorporated where appropriate. From fiscal year 1985 through fiscal year 1993, the Park Service’s operating budget rose about 14 percent, when adjusted for inflation. At most of the parks we visited, the funding increases over this period outpaced inflation. Despite these increases, the Park Service has not been able to keep up with visitor services and resource management needs. Our work identified two factors common to most of the parks we visited that substantially affected the level of visitor services and resource management activities. These factors were additional operating requirements and increased visitation. From fiscal year 1985 through fiscal year 1993, the Park Service’s operating budget rose from about $627 million to about $972 million—or by about 55 percent. After factoring in inflation, the increase still amounts to about 14 percent. At 10 of the 12 parks we visited, funding increases outpaced inflation during this time period. 
Increases ranged from about 2 to 200 percent. Despite these increases, additional demands on the parks are eroding the Park Service’s ability to keep up with visitor services and resource management needs. To more fully understand the management of parks, it is important to note that the majority of the operations budget—in most cases, over 75 percent for the 12 parks we visited—goes toward salary and benefit costs. The remaining amount—usually 25 percent or less—funds all other ongoing operating needs, such as utilities, supplies and materials, equipment, training, and travel. Table 3.1 shows the fiscal year 1993 operations budget and the breakdown of salary and benefits versus other costs for each of the 12 parks we visited. In order for these data to be comparable with the 1993 data we collected on park conditions, we used fiscal year 1993 as the basis for our budget analysis. Many additional operating requirements are passed on to the parks through federal laws or administrative requirements. In many cases, funds are not made available to the parks to cover the entire cost of complying with these requirements. Park managers cited numerous requirements from such laws as the Occupational Safety and Health Act and the Resource Conservation and Recovery Act. At the 12 parks we visited, park managers cited many different federal laws affecting the parks’ operations. (See app. I for a listing of these laws and their requirements.) While some of these laws were enacted over 20 years ago, the requirements to comply with them may change over time. To the extent that this occurs, the result may be increased operating requirements for parks. In addition, to the extent that they are not fully funded, other requirements, such as changes in employee benefits and ranger certification procedures, can significantly affect parks’ budgets and the level of visitor services and resource management activities that the parks can undertake. 
Park managers told us that meeting the requirements of numerous federal laws frequently means diverting personnel and/or dollars from other day-to-day park activities, such as visitor services or resource management. For example, according to a park official, in fiscal year 1994, Yosemite officials spent about $122,000 to address two federal requirements—$42,000 to correct violations of the Occupational Safety and Health Administration’s regulations and $80,000 to identify and remove hazardous waste. These costs included both personnel and nonpersonnel expenditures. Officials at Yosemite told us that no additional funds were provided to the park for these expenses and that the personnel and dollars needed to meet these requirements were therefore diverted from other planned visitor and resource management activities. At Glacier, federal requirements for lead paint abatement, asbestos removal, surface and waste water treatment, and accessibility for disabled visitors required park managers to divert staff time and operating funds from visitor and resource management activities. For example, park officials at Glacier told us that to comply with provisions of the Safe Drinking Water Act, they must test the water in the park’s systems for bacteria more frequently. Beginning in fiscal year 1992, instead of submitting one sample for bacterial testing each month, they were required to submit two. Additionally, the cost of each test doubled from about $7 to $15. With 27 separate water systems to test monthly, the cost of testing for bacteria only (many other tests are required for other substances) has risen from about $2,268 to $9,720 per year. The park had to absorb the $7,452 increase per year from the nonsalary portion of its operating budget—about 1 percent of the fiscal year 1993 amount. 
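The Glacier water-testing figures can be verified with simple arithmetic. The sketch below restates the calculation; all inputs (27 systems, sample counts, and approximate per-test costs) are taken from the paragraph above:

```python
# Glacier National Park: annual cost of monthly bacterial water testing.
systems = 27    # separate water systems tested each month
months = 12

# Before fiscal year 1992: one sample per system per month at about $7 each.
cost_before = systems * months * 1 * 7    # $2,268 per year

# Beginning in fiscal year 1992: two samples per system per month at about $15 each.
cost_after = systems * months * 2 * 15    # $9,720 per year

increase = cost_after - cost_before       # $7,452 absorbed from the nonsalary budget
print(cost_before, cost_after, increase)  # prints 2268 9720 7452
```

The $7,452 increase matches the report’s figure of about 1 percent of Glacier’s fiscal year 1993 nonsalary budget.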
In addition, the Safe Drinking Water Act imposed new requirements that did not exist before, such as chlorinating the main water system and then dechlorinating it prior to its discharge into a river. This has added about $10,000 to $12,000 per year to the park’s water costs—about another 1 percent of the park’s nonsalary budget amount. In addition to operating requirements placed on parks by a variety of federal laws, park operating budgets are affected by required changes in personnel costs, such as compensation and benefits. Because salaries and benefits constitute such a large percentage of a park’s budget—in most cases, over 75 percent for the parks we visited—almost any increase affecting salaries that is not fully funded (e.g., cost-of-living raises, employer retirement contributions, and increased compensation for certain types of employees) will have a major impact on a park’s budget. For example, according to a headquarters official, in fiscal year 1994, the National Park Service requested and the Congress approved an upgraded civil service classification for rangers. The upgraded classification resulted in increased compensation for park rangers, beginning in the last quarter of fiscal year 1994. Although most parks received additional funds to partially offset the increased compensation costs in the first full year, some parks had to absorb large amounts from their operating budgets. Lake Mead, for example, absorbed about $200,000 in fiscal year 1995, while Shenandoah absorbed about $50,000 out of its budget for that year. We were also advised that unless additional funds are provided, future increased ranger costs will be paid by the parks. Unless park managers are willing to reduce park staffing, these additional personnel costs that the parks must absorb are diverted from the 25 percent or less of the annual operating budget that they have available after salaries and benefits. 
In the case of Lake Mead, the $200,000 represented about 9 percent of the fiscal year 1993 nonsalary total; for Shenandoah, the additional cost represented about 3 percent. Finally, the parks must also absorb other increases in nonpersonnel costs for activities that they are required to undertake. For example, in 1991, the Department of the Interior required that its nonseasonal law enforcement officers undergo a higher-level background check than had previously been done to better ensure their qualifications. As a result, the cost of each background check jumped from under $100 to about $1,800. Since 1991, that cost has risen to over $3,000, according to several Park Service officials. In addition, the cost of background checks for seasonal law enforcement employees is now about $1,800. In fiscal year 1994, Yosemite spent about $200,000 on background checks. This represented about 6 percent of the fiscal year 1993 operating budget available after salaries and benefits. While park managers did not disagree with the merits of the various laws and other requirements with which they must comply, they believe that, taken as a whole, complying with these requirements significantly reduces the operating funds available for visitor services and resource management activities. The second factor eroding the parks’ ability to keep up with visitor and resource needs is the increase in visitation. Eight of the 12 parks showed increases in the number of visitors; the average increase was about 26 percent since 1985. The four parks where decreases occurred were small historical parks, where visitation for all four parks averaged less than 200,000 in 1993. In addition, in many parks, the length of the tourist season has been expanding. Thus, not only are more people at many parks, but the length of time for which at least basic services must be provided is increasing. Table 3.2 shows the changes in visitation at the 12 parks we visited. 
Substantial increases in visitation drive up costs for many operations that directly support visitor activities, such as waste disposal; general maintenance and supplies; road, trail and campground repair; employee overtime; and utilities. Additionally, staff are sometimes diverted from other activities to manage the increasing crowds. For example, according to a park official at Bandelier, because of increased visitation, comfort stations must be cleaned more frequently and litter must also be picked up more often, resulting in the allocation of more of the park’s budget to maintenance personnel and less to resource management activities. Park officials at Bandelier also told us that especially on weekends, resource management, visitor protection, and interpretive staff are assigned to direct traffic or perform other crowd control activities. An official at Lake Mead told us that because of increased visitation and staffing limitations, some law enforcement rangers work 125 hours over a 2-week period and earn $2,000 to $3,000 per month in overtime during the summer. In total, the park officials at Lake Mead indicated that they spend about $150,000 annually on summer overtime for law enforcement rangers. In addition, the expansion of the visitor season has created increased demands on parks. At Glacier, for example, September visitation now rivals that of historically high June, and almost 20 percent of the park’s annual visitation occurs in September and October. Officials at many of the parks we visited spoke of an expanded visitor season. This expansion requires at least minimal visitor services and facilities for longer periods than had traditionally been the case. 
Combined with current budget and personnel ceilings, this expansion has sometimes necessitated cutting back on the scope and amount of services available during the peak season (e.g., fewer interpretive programs and shorter visitor center hours) or diverting staff from other activities to handle the longer visitor season. For example, at Glacier, the variety of walks and hikes offered during the peak season of fiscal year 1993 declined so that some services could be provided in September. Even so, officials at some of the parks told us that they are able to provide only a limited amount of visitor services during the extended season. Some park officials also told us that the increasing visitation levels and seasons are a major factor in absorbing the operating budget increases in recent years. The National Park Service received increased operating budgets from fiscal year 1985 through fiscal year 1993. During this time, the agency’s operating budget, adjusted for inflation, has increased about 14 percent. At 10 of the 12 parks we visited, funding increases outpaced inflation. However, at the same time, the parks have faced new requirements and demands that have seriously eroded the impact of these budget increases. These include additional operating requirements imposed on park managers by a number of laws and administrative requirements and additional operating demands associated with increasing levels of visitation. Cumulatively, these factors have contributed to declining levels of visitor services and resource management activities and have limited the parks’ ability to stem this decline. As a result, many park needs are not being met. National Park Service officials provided no comment on this chapter of the report. Many of the problems facing the National Park Service are not new. 
At the same time that visitor services are being cut back and parks are operating without sufficient information on the condition of many of their resources, the Park Service faces a multibillion-dollar maintenance backlog and, like all federal agencies, tight budgets. In addition, infrastructure and development needs for the park system continue to grow as new units are added—31 since 1985. Under these circumstances, an improvement in the short term is unlikely. Dealing with this situation calls for the Park Service, the administration, and the Congress to make difficult choices involving how the parks are funded and managed. However, regardless of which, if any, of these choices are made, the Park Service should seek to stretch available resources wherever possible by operating more efficiently, continuing to improve its financial management and performance measurement systems, and broadening the scope of its current restructuring plans. The choices available to deal with the conditions within the national park system center on three alternatives: (1) increasing the amount of financial resources for the parks, (2) limiting or reducing the number of units in the park system, and (3) reducing the level of visitor services. The alternatives can be considered individually or in combination. If the national park system is to maintain its size and traditional level of visitor services, additional financial resources will be necessary. Today, the annual operating budget for the national park system is over $1.1 billion. Of this amount, less than 8 percent is derived from revenues generated by entrance and other in-park fees. The Park Service estimates that during fiscal year 1995, it will receive about 33 cents in fees, on average, from each park visit. In comparison, it will cost the Park Service about $4.12 for each park visit. One way to increase financial resources to the parks is for the Congress to increase the Park Service’s annual appropriations. 
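The per-visit figures cited above imply a fee-coverage ratio consistent with the “less than 8 percent” share of the operating budget derived from fees. A minimal sketch, using only the two per-visit averages from this paragraph:

```python
fee_per_visit = 0.33    # average fee receipts per park visit, fiscal year 1995 estimate
cost_per_visit = 4.12   # average Park Service cost per park visit

coverage = fee_per_visit / cost_per_visit
print(f"Fees recover about {coverage:.0%} of the cost of each visit")
```

In other words, on average the visitor pays roughly one dollar for every twelve dollars the Park Service spends on that visit.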
However, given today’s tight fiscal climate, it is unlikely that substantially increased federal appropriations will be available to fill the gap between park revenues and park operating expenses. To fill the gap, additional sources of revenues would have to be found. Sources of increased revenues to the parks could include (1) increasing park fees, (2) receiving better returns from in-park concessioners, and (3) encouraging park managers to be more entrepreneurial by providing them with authority to enter into partnership agreements with nonfederal entities. These alternatives are not new; the Park Service has initiated and/or supported similar proposals in the past. Increased park fees would come primarily from two sources— entrance admissions and fees for camping, backcountry, and other in-park activities. Regarding park entrance fees, 186 of the 368 park units charge entrance fees. Of those, fewer than 10 percent charge the maximum allowable admission rate of $3 per person or $5 per vehicle. Table 4.1 shows the fee status and fees charged at the 12 parks included in our study. In some cases, those park units that do not charge entrance fees are legislatively precluded from doing so. The Statue of Liberty National Monument and Ellis Island is one such example. On the basis of the number of visitors to the Statue of Liberty and Ellis Island in 1993, imposing an entrance fee of about $2 per visitor would allow the park to cover its operating costs. Using the same formula, we found that considerably higher fees would be required at other parks. For example, on the basis of 1993 visitation levels at Pecos National Historical Park, a fee of about $23 per visit would be required to fully fund operating costs. In addition to entrance fees, charging fees for an array of other in-park activities also presents opportunities to increase park revenues. 
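The break-even entrance fees cited above for the Statue of Liberty (about $2) and Pecos (about $23) follow from dividing a park’s annual operating costs by its annual visitation. The sketch below shows that formula; the cost and visitation numbers in the example are hypothetical placeholders, since the report does not state the underlying figures for either park:

```python
def break_even_fee(annual_operating_cost, annual_visits):
    """Entrance fee per visit that would fully fund a park's operating costs."""
    return annual_operating_cost / annual_visits

# Hypothetical illustration only -- not the actual figures for any park:
fee = break_even_fee(annual_operating_cost=4_000_000, annual_visits=2_000_000)
print(round(fee, 2))    # a $4 million budget spread over 2 million visits is $2 per visit
```

The same formula explains why a lightly visited park such as Pecos would require a far higher break-even fee than a heavily visited one.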
Many in-park activities, such as fishing, backcountry hiking and camping, climbing, and commercial filmmaking, place special demands on park resources. These activities put pressure on the park’s human and physical resources, as well as infrastructure, beyond that created by visitors who merely go to the parks to look at the resources. Yet the additional costs associated with these activities are passed on to the users of these services only partially or not at all. For example, according to park officials, neither Harpers Ferry nor Antietam charges fees for issuing commercial filming permits, although they do recover any actual costs incurred because of the filming. (Harpers Ferry recovers costs only if the filming takes more than 2 days.) In Glacier and Shenandoah National Parks, which have a substantial amount of backcountry camping, no fees are charged for the required permits. In 1994, Shenandoah issued over 8,300 permits for more than 23,000 people. Currently, throughout the national park system, fees cover only about 5 percent of the costs of providing in-park activities. Imposing fees where none exist and/or increasing fees at those park units that now have them may affect visitation. However, a recently published 1995 survey indicated that most people—79 percent—would not mind paying increased fees if the fees stayed within the park system. At the same time, while increasing the amount of fees going to the parks will not solve all of the parks’ financial problems, it could help stem the deteriorating conditions identified in this report and would shift some of the cost burden from general taxpayers to the beneficiaries of the services. While entrance fees may not be desirable or feasible at some units, to the extent that fees are permitted or increased, the revenues would need to stay within the park system and not be returned to the U.S. Treasury, as now occurs. 
In this regard, the Department of the Interior proposed in 1994 to increase park entrance fees for fiscal year 1995. However, the proposed legislation was not enacted. Interior has made a similar proposal in fiscal year 1995 that calls for the majority of the revenue generated from increased fees to be retained in the national park system. Better returns from concessioners’ contracts throughout the national park system would also expand the revenue base available to parks. Similar to entrance and user fees, increased revenues from concessioners’ contracts, if returned to the parks, could be used to help fund the parks’ operations. However, like entrance fees, for the parks to benefit from increased concession fees, these fees must remain in the Park Service and not be returned to the U.S. Treasury, as now occurs. Historically, the Park Service has not viewed concessioners’ contracts as business assets but as customer service obligations. Accordingly, the agency has not approached concessions management with the objective of realizing a fair return for the taxpayer. Instead, the return to the government has averaged under 3 percent of gross concession revenues. Current and past administrations have acknowledged that these returns are too low. Another way to expand the revenue base for operating and maintaining the national park system is to encourage more entrepreneurial approaches by park managers by providing them with more flexibility to enter into partnership agreements with the private sector and other parties. As pointed out in the administration’s report on the National Performance Review (NPR), private donations, even more than park fees and concessioners’ contracts, represent a source of untapped revenue for the Park Service. Although more than 200 nonprofit groups and many corporations give money to the parks, the Park Service is hindered in its dealings with them. 
Currently, park managers have no authority to directly solicit funds and may not enter into cooperative agreements with nonfederal partners unless specifically authorized by law. Donations can currently be made directly to the Park Service or through the National Park Foundation, which was established by the Congress in 1967 to solicit, accept, and administer donations for the benefit of the Park Service. At the park level, some Park Service officials believe that if provided with broader authority to enter into partnerships with nonfederal organizations and to solicit donations, the Park Service could be more entrepreneurial in its efforts to close the gap between its current funding sources and park needs. In lieu of, or in conjunction with, permitting an increased flow of revenues to the parks, another alternative that could be considered—assuming stable funding levels—is limiting or perhaps even cutting back on the number of units in the national park system. To the extent that the system is permitted to grow, associated infrastructure and development needs will also grow. As this occurs, more park units will be competing for the limited federal funding that is available. One way to help ease the financial pressures now facing the national park system until current park conditions can be adequately addressed is to limit the number of parks added to the system, perhaps by implementing a more rigorous review and approval process or by better defining what types of units should be included. Another way to ease financial pressures is to reduce the number of units currently in the system, taking into account the costs, benefits, and savings that would be achieved by specific decisions. In commenting on this report, Park Service officials stated that substantial cost savings could only be achieved by closing some of the largest park units, which is unlikely. 
Another alternative, in the absence of increased financial support, would be to reduce the level of visitor services provided by the parks to more closely match the level of services that can be realistically accomplished with available resources. This could include, for example, limiting operations to fewer hours per day or fewer days per year, limiting the number of visitors, or perhaps temporarily closing some facilities to public use. Regardless of which, if any, of the choices mentioned above are made, the Park Service should seek to stretch available resources by operating more efficiently, continuing to improve financial management and performance measurement systems, and broadening the scope of its current restructuring plans. While these actions alone will probably not be sufficient to meet all of the Park Service’s funding needs, they should result in increased efficiencies so that the Park Service can do more under current funding levels. As we reported earlier this year, our work, as well as that of Interior’s Inspector General, has shown that the Park Service lacks (1) necessary financial and program data on its operations, (2) adequate internal controls on how its funds are spent, and (3) performance measures on what is being accomplished with the money being spent. Accurate data and adequate financial controls are prerequisites for developing reliable management reports and measures of performance that could help the agency operate more efficiently. Accurate data, effective controls, and useful measures of performance would lower the agency’s costs by permitting managers to focus on results. The Park Service has reached agreement with the Department of the Interior’s Inspector General on how to address the concerns relating to necessary financial data and adequate internal controls. The Park Service is currently implementing an accounting system improvement project plan agreed to by the Inspector General. 
Additionally, the Park Service is in the process of developing reliable performance measures. With proper implementation of these management tools, the Park Service will be able to know (1) whether funds are being used for their intended purpose, (2) the nature and the extent of the problems associated with the resources it is mandated to protect and preserve, (3) the effectiveness of measures taken to deal with the problems, and (4) the activities and programs for which limited resources can be allocated to do the most good. The need for improved systems of performance management is particularly critical in light of the highly decentralized nature of the Park Service, where individual park managers have broad discretion to determine how to spend operating funds. Moreover, if the Park Service receives a broader revenue base by increasing fees, getting a higher return on concessioners’ contracts, and/or permitting park managers more flexibility to solicit funds by entering into partnerships with nonfederal entities, the need for better systems of performance management is even greater. Another way the Park Service can stretch its resources is to broaden the scope of its current plan for restructuring the agency. To respond to the streamlining objectives of the administration’s NPR initiative, the Park Service has prepared and is currently implementing a restructuring plan. Essentially, the restructuring involves relocating some headquarters personnel to field units and decentralizing certain functions while at the same time protecting on-the-ground employees who deliver services directly to the public. This plan is to be implemented over the next 4 fiscal years. As we testified in February 1995, we believe that the current plan should achieve some improvements; however, we are concerned that it does not go far enough because it only addresses gains to be derived from sharing resources within the Park Service. 
In our view, the current fiscal climate demands that the Park Service work with other federal land management agencies to reduce costs, increase efficiency, and improve service to the public by collocating or combining activities wherever possible. The Park Service has begun to do this with several agencies—federal, state, and local—and needs to continue to look beyond its own organizational boundaries and work closely with the Congress and other federal land management agencies to develop a coordinated interagency strategy to link Park Service reforms to reforms being proposed by other federal agencies. The ultimate goal of this strategy would be to coordinate and integrate the functions, systems, activities and programs of the Park Service with those of the other federal land management agencies so that they operate as a unit at the local level. Moreover, as its restructuring plan proceeds, the Park Service is now being asked to respond to the second phase of the NPR initiative. This second phase, announced in January 1995, is asking the Park Service to identify functions and programs that it could terminate, privatize, or devolve to state or local governments. To the extent that these determinations result in relieving the Park Service of functions or programs not essential to its mission, costs should be reduced. The national park system is at a crossroads. While more people are visiting parks, the scope and quality of the services available to these visitors are deteriorating. In addition, the National Park Service, as the steward for many of the nation’s natural and cultural treasures, has a myriad of problems to address, ranging from insufficient data on the conditions of resources to an ever-increasing, multibillion-dollar maintenance backlog. 
While the Park Service has recognized these problems and has taken some actions to address them, the magnitude of the problems calls for difficult choices to be made by the Park Service, the administration, and the Congress. Choosing among the various alternatives for funding and managing the parks will be difficult. However, unless choices are made, further cutbacks in visitor services will have to occur, and the Park Service’s ability to preserve and protect national treasures for the enjoyment of future generations may be in jeopardy. Regardless of which, if any, of these choices or combination of choices is made, the Park Service needs to continue to look for ways to stretch its resources by operating more efficiently and improving its financial management and performance measurement systems. National Park Service officials provided both technical clarifications and substantive comments on this chapter. The report was revised to reflect their comments. Substantively, Park Service officials stated that increased appropriations is an alternative for dealing with the parks’ lack of adequate financial resources but that our report implied it was not. We agree that increased appropriations is a choice. However, we think it is an unlikely one in today’s tight fiscal climate and have revised the report accordingly. In addition, Park Service officials mentioned that private capital is another alternative to increase revenues. We agree; private capital is already addressed as part of our discussion of possible ways to increase the flow of revenues to the parks. Specifically, we note that donations and partnerships with private entities could help close the funding gap in the parks. Park Service officials also commented that increasing fees at the national parks would not make the system self-sufficient, although they support the need for increased fees. 
They also said that there may be some units, such as Independence Hall, which should not charge fees because of their national significance. We agree that increasing fees is not going to fix all of the problems in the parks and have revised the report to reflect that point. We believe, however, that it is an alternative that can provide more revenue to parks. We also recognize that charging fees may be undesirable or infeasible at some units and have revised the report accordingly. Park Service officials further commented on our discussion of limiting and/or reducing the number of park units. They said that there is no evidence that the addition of new units has taken away from resources for existing units. We believe that given the current tight fiscal climate, future growth in appropriations is unlikely; accordingly, new units would be competing for available funds. The officials also said that closing units could be costly and that the size of units likely to be closed may not provide substantial cost savings. We agree that net cost-savings should be considered in any closure decisions and have revised the report to reflect this. Park Service officials also said that to achieve any substantial cost savings, large units would have to be closed, which is unlikely. This comment has been reflected in the report. Finally, Park Service officials identified efforts that they felt needed to be acknowledged in the report. We agree and have revised the report to acknowledge Park Service’s efforts in the areas of (1) developing various fee legislation proposals, (2) working in partnership with other agencies, and (3) addressing prior findings of the Inspector General relating to financial management and internal control issues.
Pursuant to a congressional request, GAO reviewed the current condition of 12 national park units, focusing on: (1) whether any deterioration in visitor services or park resources is occurring at the 12 units; (2) what factors contribute to the degradation of visitor services and parks' natural and cultural resources; and (3) the National Park Service's efforts in dealing with these problems. GAO found that: (1) there is cause for concern about the condition of national parks for both visitor services and resource management; (2) the overall level of visitor services is deteriorating at most parks; (3) services are being cut back and the condition of many trails, campgrounds, and other facilities is declining; (4) effective resource management is difficult because most park managers lack sufficient data to determine the overall condition of their parks' natural and cultural resources; (5) parks have difficulty meeting additional operating requirements and accommodating increased visitation; and (6) the Park Service is considering increasing the amount of financial resources going to parks, limiting or reducing the number of units in the park system, and reducing the level of visitor services, while also working to improve its financial management and performance measurement systems.
Securing the northern border while at the same time facilitating trade is the primary responsibility of various components within DHS, in collaboration with other federal, state, and local entities. CBP is the lead agency responsible for securing the nation’s borders while facilitating legitimate trade and travel. CBP’s Office of Field Operations is responsible for cargo and passenger processing activities related to security, trade, immigration, and agricultural inspection at air, land, and sea POEs. In addition, GSA oversees design, construction, and maintenance for all POEs in consultation with CBP. Within DOT, the Federal Highway Administration provides funding for highway and road construction and administers the Coordinated Border Infrastructure Program that provides funding to support the safe and efficient movement of motor vehicles across the land borders of the United States with Canada and Mexico. In executing its mission, CBP operates 166 land border POEs, whose ownership varies by location: 99 are owned by GSA, 22 are leased by GSA, 1 is owned by the National Park Service, and 43 are owned by CBP; the remaining port is partially owned and leased by GSA. In general, the CBP-owned ports are small, rural, and characterized by low traffic volumes. In contrast, GSA-owned ports are large, urban, and high-traffic-volume ports. A majority (122 of 166) of land border crossings are located on the northern border and vary considerably in size, location, and volume. See figure 1 for an example of a POE. In fiscal year 2005, the conference report accompanying DHS’s appropriation directed CBP to submit a master construction plan for fiscal years 2005 through 2009, including purpose, cost, and schedule details for each facility construction planned. Further, the Consolidated Appropriations Act, 2008, required DHS to prepare and submit a biennial National Land Border Security Plan. 
This plan was to include a vulnerability, risk, and threat assessment of each POE located on the northern border or the southern border, beginning in January 2009. Moreover, the DHS Appropriations Act for fiscal year 2009 required that, in fiscal year 2010 and thereafter, CBP’s annual budget submission for construction include, in consultation with GSA, a detailed 5-year plan for all federal land POE projects with a yearly update of total projected future funding needs. Additionally, to help address infrastructure constraints, in 2009, the American Recovery and Reinvestment Act appropriated $720 million for land POE modernization. DHS received $420 million for ports owned by CBP, which CBP plans to use for reconstruction, repairs, and alterations at land POEs. These funds will be used at 21 POEs located along the northern border. The act appropriated the remaining $300 million for the GSA-owned inventory, which is being used to provide design or construction funds to seven new or ongoing capital projects, four of which are along the northern border. Moreover, congressional interest in CBP’s ability to link resources to its mission led Congress to call on CBP to develop resource allocation models. In response to language in the conference report for the fiscal year 2007 DHS appropriation and the Security and Accountability for Every Port Act of 2006, CBP developed a staffing model for its land, air, and sea POEs. The conference report directed CBP to develop the staffing model in a way that would align officer resources with threats, vulnerabilities, and workload. The staffing model is designed to determine the optimum number of CBP officers that each POE needs to accomplish its mission responsibilities. Processing commercial vehicles at land POEs involves various steps and requirements. First, carriers are required to submit electronic lists describing what they are shipping, referred to as e-Manifests, to CBP prior to a shipment’s arrival at the border. 
CBP requires that e-Manifests for FAST shipments be submitted 30 minutes prior to arrival, while e-Manifests for non-FAST shipments must be submitted at least 1 hour before arrival. Second, CBP reviews the e-Manifest using its Automated Commercial Environment (ACE) database, among others, and assigns a risk level to the shipment, a process known as pre-vetting. Next, when the commercial truck proceeds into the United States, it must go to the primary inspection booth at the U.S. POE, where a CBP officer reviews documentation on the exporter, importer, and goods being transported. If the truck’s documentation is consistent with CBP requirements and no further inspections are required, the truck is allowed to pass through the port. Depending on the POE, goods imported, or law enforcement requirements, CBP may direct the commercial truck to secondary inspection. According to CBP, trucks are referred to secondary inspection for numerous reasons, such as an officer’s initiative based on experience and training, targeted inspection, or random inspection. Secondary inspection involves more detailed document processing and examinations using other methods, such as the Vehicle and Cargo Inspection System (VACIS), a gamma ray imaging system used to detect various forms of contraband, including explosives and drugs in commercial vehicles; advanced radiation portal monitor (RPM), a vehicle monitoring system used to detect nuclear and radiological materials; or unloading and physical inspection. Trucks that require secondary inspection are inspected by CBP and may be inspected by more than one federal agency, depending on their cargo. For example, FDA, under HHS, and the Food Safety and Inspection Service (FSIS), under the Department of Agriculture, have primary responsibility for food safety. FDA is responsible for the safety of virtually all foods, including milk, seafood, fruits, and vegetables. 
FSIS oversees the safety of meat, poultry, and processed egg products, both domestic and imported, and verifies that shipments of these products meet FSIS requirements. Figure 2 shows the cargo processing steps at land POE crossings. CBP launched the FAST program in 2002 to include electronic and semi-electronic automated processing for preapproved shipments. The FAST program is intended to secure and facilitate legitimate trade by providing expedited processing of participants’ merchandise in designated traffic lanes at select border sites, fewer referrals to secondary inspections, “front-of-the-line” processing in secondary CBP inspections, and enhanced security. FAST shipments are screened through advanced manifest reviews and targeting, nonintrusive inspections, canine sweeps, and random exams. To be eligible to receive the benefits of the FAST program, every link in the supply chain—the carrier, the importer, and the manufacturer—is required to be certified under the Customs-Trade Partnership Against Terrorism (C-TPAT) program and the driver must be pre-vetted in the FAST program. C-TPAT is a customs-to-business partnership program that provides benefits to supply chain companies that agree to comply with predetermined security measures. We reported in August 2008 that all C-TPAT participants—the carrier, importer, and manufacturer—are vetted prior to enrollment and are required to certify that they meet program minimum security requirements, such as a secure area to store trailers to prevent unauthorized access or manipulation. Additionally, the (1) driver is required to have a pre-vetted FAST card, (2) truck is required to have a transponder, (3) truck cannot be carrying shipments with loads from multiple shippers that are not C-TPAT certified, and (4) e-Manifest is required to be submitted to CBP 30 minutes prior to arrival at the port. 
There are approximately 90,000 FAST drivers and 9,830 C-TPAT members, of which 4,400 are importers and 2,721 are carriers. The remaining 2,709 C-TPAT members are brokers, consolidators, and foreign manufacturers. FAST participation has increased substantially since CBP launched the program. However, the number of FAST participants decreased slightly in 2009, as shown in figure 3. All 122 northern border POEs and lanes can process FAST shipments in ACE, but 7 POEs on the northern border have FAST-dedicated lanes. ACE tracks shipments by the types of manifests trucks use to report their shipments. FAST shipments are processed in ACE using two of the various types of manifests—National Customs Automation Program (NCAP), limited to certain types of FAST shipments, and Pre-Arrival Processing System (PAPS), used by non-FAST and FAST shipments. According to CBP officials, the FAST/NCAP shipment provides limited information compared to a standard e-Manifest and no entry record is filed at the time the shipment is released. For example, the FAST/NCAP manifest does not include the driver information, trailer license plate number, or the quantity of the shipment. The driver information and trailer license plate number can be added to the manifest by CBP at the primary inspection point. However, the quantity of the shipment must be recorded by the broker when the entry is filed within 10 days of crossing the border. According to CBP, the FAST/NCAP manifest is used primarily by the auto industry. In contrast, the PAPS shipment uses a complete data set, including all the information CBP requires, such as driver information, trailer license plate number, and the quantity of the shipment. Additionally, an entry record must be on file before a shipment is released. Approximately 60 percent of FAST shipments are PAPS shipments. 
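The practical difference between the two manifest types is which data elements must be present before a shipment can be released. A minimal sketch of that distinction follows; the field names and validation function are illustrative assumptions for this report, not ACE's actual schema or logic:

```python
# Required-at-release data elements by manifest type, per the description
# above: a FAST/NCAP manifest omits driver information, trailer license
# plate, and quantity at release, while a PAPS manifest carries the
# complete data set and must have an entry record on file.
REQUIRED_FIELDS = {
    "NCAP": {"shipper", "carrier"},
    "PAPS": {"shipper", "carrier", "driver", "trailer_plate",
             "quantity", "entry_record"},
}

def missing_fields(manifest_type: str, manifest: dict) -> set:
    """Return the data elements still required before release."""
    required = REQUIRED_FIELDS[manifest_type]
    return {field for field in required if not manifest.get(field)}

# A manifest with only shipper and carrier satisfies NCAP but not PAPS.
sparse = {"shipper": "ACME", "carrier": "NorthLine"}
print(missing_fields("NCAP", sparse))  # set()
print(missing_fields("PAPS", sparse))  # driver, trailer_plate, quantity, entry_record (order may vary)
```

Under this sketch, the broker's later entry filing (within 10 days for NCAP) would supply the elements that were missing at release.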
CBP is limited in its ability to accurately quantify the impacts of staffing and infrastructure on wait times because its wait times data are collected using inconsistent methods and are unreliable. CBP defines border wait time as the time it takes for a vehicle to travel from the end of the queue to the CBP primary inspection point. CBP calculates and reports wait times hourly at 28 major land POEs along the northern border. In October 2007, CBP issued interim guidance on approved methods for measuring wait times at land POEs. The guidance outlined various methods for calculating wait times, including (1) line of sight—CBP officials at the port estimate wait times based on volume, number of lanes open, and landmarks that identify the end of the line to the naked eye or camera; (2) benchmark—CBP officials at the port and stakeholders identify various benchmarks and measure wait times from the end of the traffic line to the primary inspection booth based on the number of lanes open and the benchmark points; (3) license plate reader—CBP officials at the port manually record the license plate of the last vehicle in line and then run the plate in TECS to identify when the plate was processed at primary inspection; and (4) driver surveys—when the end of the line is no longer visible, CBP officials at the port use driver surveys to estimate wait times. Drivers arriving at primary inspection are asked by the CBP officer how long they have been waiting in the queue. CBP officials at the port take an average of the survey results to estimate wait times. The six POEs we visited use one or more of the methods described above to measure wait times. Because the wait times are estimated using approximations of varying reliability at selected POEs, the data cannot be used for analyses across ports, and the methods of collection raise questions about the reliability of the overall data. 
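The license plate reader and driver survey methods described above reduce to simple timestamp differencing and averaging, which is why their reliability depends so heavily on the quality of the inputs. A minimal sketch, with hypothetical function names and data (not CBP's actual TECS implementation):

```python
from datetime import datetime, timedelta

def wait_time_from_plate_reads(end_of_queue_read: datetime,
                               primary_processed: datetime) -> timedelta:
    """License plate reader method: record when the last vehicle's plate
    is observed at the end of the queue, then subtract that from the time
    the same plate clears primary inspection."""
    return primary_processed - end_of_queue_read

def wait_time_from_surveys(reported_minutes: list) -> float:
    """Driver survey method: average drivers' self-reported minutes in
    the queue. Subjective inputs make this the least reliable method."""
    return sum(reported_minutes) / len(reported_minutes)

queued = datetime(2010, 7, 1, 14, 0)    # plate recorded at end of queue
cleared = datetime(2010, 7, 1, 14, 45)  # plate processed at primary
print(wait_time_from_plate_reads(queued, cleared))  # 0:45:00
print(wait_time_from_surveys([40, 50, 45]))         # 45.0
```

Either estimate covers only the vehicle sampled; inconsistent sampling across ports is one reason the resulting data cannot support cross-port analysis.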
CBP officials stated that all wait time measures are collected and coordinated with local bridge authorities and regional traffic management centers for concurrence prior to posting. However, some CBP officials as well as 13 of the 15 importers, trade organizations, and border stakeholders we spoke with about the accuracy of CBP’s wait times raised questions about the accuracy and reliability of CBP’s wait times data. For example, the CBP officer responsible for maintaining the Border Wait Times database stated that the accuracy of the wait times data varies depending on the method used to collect the data. Specifically, the official stated that driver surveys were subjective, and that impatient drivers may not provide accurate times spent in the queue. Further, a CBP official working on the wait times pilot project stated that manual measurement of wait times data is time consuming for staff, inaccurate, and could be improved. Commerce stated that the methods used to measure border wait times are subjective and that, therefore, the data vary in their reliability. Moreover, 12 other border stakeholders, trade organizations, and importers told us that industry organizations do not use CBP’s wait times data because they question the accuracy of the data. According to CBP, it uses several methods to measure wait times due to the infrastructure and port layout at land POEs. However, the formulas used to estimate wait times are not consistently updated. Further, because lane use varies at the POEs depending on traffic level and infrastructure, it may be difficult to obtain accurate wait times for passenger and commercial vehicles when all traffic shares the same lane. Additionally, prior to April 2006, CBP’s Border Wait Time database did not distinguish between wait time data for NEXUS and FAST lanes at several POEs. As a result, wait times data for these programs were recorded within a single data element. 
Because of these factors, the data cannot be used for analyses across POEs or at individual ports, and the methods of collection raise questions about the reliability of the overall data. Standards for internal control require that all transactions be clearly documented in a manner that is complete, accurate, and useful for managers and others involved in evaluating operations. Moreover, internal control standards call for agencies to establish policies and procedures to ensure the validity and reliability of data. CBP acknowledged that the current methodology for measuring private and commercial vehicle wait times is not ideal, and has initiated a pilot project to automate wait times measurement and to improve the accuracy and consistency of the data collected. The wait times pilot project is a binational interagency initiative led by the Border Wait Times Work Group made up of representatives from CBP, the Canada Border Services Agency, the Federal Highway Administration, and Transport Canada. CBP and DOT officials anticipate spending approximately $2 million on the pilot project, and CBP and Transport Canada have committed to funding 50 percent of the cost. The initial goal of the pilot project is to identify and test up to eight potential technology solutions for automating the measurement of border wait times for passengers and commercial vehicles at two land border locations, the Peace Bridge between Buffalo, New York, and Ft. Erie, Ontario, and the Pacific Highway crossing between Blaine, Washington, and Douglas, British Columbia. The pilot also intends to implement two long-term technology solutions at one or more land border crossings along the U.S.-Canadian border. According to DOT, if the pilot project is successful, the selected pilot technologies will remain in place for approximately 1 year at the designated sites until further funding is identified. 
The objectives of the project are to measure wait times in both directions for cars and trucks, determine real-time and predictive capabilities, replace the manual process for calculating wait times, and explore long-term operations. According to DOT, the test sites were selected based on several criteria, including traffic types, volume, wait time variability and frequency, site characteristics, and willingness of site operators to participate in the pilot project. The initial technology deployment is scheduled to occur in the summer of 2010. As of April 2010, the Border Wait Times Work Group had selected four vendor technology solutions, including traffic radar and Bluetooth, for phase I testing. According to CBP, during phase I testing, the technology solutions will be installed and testing will occur for about 30 days. If phase I testing and evaluation is successful, the technology wait time measurement solutions will be deployed at the national level during phase II pending funding. CBP expects to complete the pilot project by the summer of 2011. Using a consistent methodology, such as a standard formula and automation, to measure wait times across all ports could better position CBP to analyze trends in wait times across land POEs. CBP and GSA officials report considering wait times as well as other factors in determining staffing, managing traffic workload, and infrastructure investments. Without reliable wait times data, CBP and others are unable to quantitatively determine the extent to which staffing and infrastructure constraints affect wait times, or readily estimate the costs of border delays. Having accurate border wait times data could better position CBP to allocate the needed resources to POEs and better manage those operations. 
Moreover, CBP and DOT officials we interviewed cited a range of potential benefits that may result from automating border wait times measurement, such as (1) reducing the burden of manually collecting wait times data by customs staff; (2) increasing the accuracy, reliability, and timeliness of the wait times data collected and disseminated; (3) improving the agency’s transparency by enabling land border wait times to be easily shared with participating agencies and regional traffic management centers; (4) improving customer service by increasing available staff for other port tasks; and (5) reducing delays in freight movement. Additionally, a CBP official working on the pilot project told us that automating wait times measurement to improve the data quality will facilitate better management decisions regarding staffing needs and infrastructure investment at land POEs. CBP officials at the six POEs we visited and the 14 border stakeholders, importers, and trade organizations we spoke with about wait times agreed that, in general, wait times for commercial vehicles along the northern border have decreased since 2007. They credit reduced wait times, in part, to the economic recession, which resulted in reduced passenger and truck traffic, and to staffing and infrastructure improvements. Border wait times are influenced by multiple factors, including infrastructure available, staffing, traffic volume, and time of the year, including holiday travel and special events. Our analysis of DOT data shows that total truck crossings along the northern border decreased from about 7 million in 2005 to 5 million in 2009 (see fig. 4). This trend is also reflected in passenger crossing data. The total number of passenger crossings along the northern border declined from about 63 million in 2005 to 53 million in 2009. Although the economic downturn has reduced traffic volume and wait times, border delays were an issue before the recession. 
For example, the summer of 2007 saw the longest delays since the terrorist attacks in 2001, according to CBP and trade organizations. During this period, Port Huron, Michigan, regularly had delays that exceeded 1 hour, where the wait extended to the Blue Water Bridge from Canada into the United States, according to CBP officials, border stakeholders, and trade organizations that we interviewed. CBP officials in Detroit, Michigan, and Buffalo, New York, also reported having similar delays of over 1 hour during the summer of 2007 due to high traffic volume and infrastructure issues. Figure 5 shows trucks queuing on the Ambassador Bridge in 2007. Longer wait times at the border represent an increase in the cost of travel, which may lead people to make fewer trips. Conversely, shorter wait times represent a decrease in the cost of travel, which may lead people to make more trips. According to a number of analyses of cross-border travel, such delays can result in additional expenses for industry and consumers stemming from increased carrier costs, inventory costs, and labor costs, and a resulting reduction in trade and output. For example, many manufacturing industries on both sides of the border manage their inventories using just-in-time management, a system that allows companies to ship goods just before they are needed and keep inventories and warehousing costs lower. Studies indicated that delays at the border affect delivery of shipments, and could have major consequences for industries that are time sensitive. Examples of time-sensitive industries that are reliant on just-in-time inventories and more vulnerable to supply disruptions include the automotive industry of the Great Lakes region and companies trading manufactured goods. Studies show that congestion can affect just-in-time delivery schedules. 
For example, according to a July 2009 Brookings Institution report, unexpected delays forced assembly lines to slow down and in some cases stop when the parts they needed did not arrive on time. CBP has increased staffing levels at northern border POEs to reduce wait times and improve operations, but is challenged in balancing increased staffing with training needs. Staffing levels along the northern border have increased by 47 percent from fiscal years 2003 to 2010 and, as a result, CBP officials at the six ports we visited told us that they are better able to staff all available primary processing lanes when needed, which increases throughput and decreases wait times. For example, CBP management in Blaine, Washington; Buffalo, New York; and Detroit, Michigan, said that although they struggled with staffing issues in the past, presently, their staffing needs are met. CBP officials attributed increased staffing levels to various factors, including recent recruitment efforts and improved retirement benefits for CBP officers. To estimate its staffing needs, CBP uses a workload staffing model along with other information, such as input from CBP field offices. According to CBP, the model assesses staffing needs based on factors including traffic volume; workload data; processing times; expected time away for holidays, leave, training, and temporary duty assignments; task complexity; and threat levels, and then calculates the possible number of full-time equivalent CBP officers for each POE. CBP field offices also conduct their own staffing needs assessments by POE. CBP considers requests from field offices along with the model to determine staffing levels. 
According to CBP, since the model does not capture the complexity of the operations at the ports, such as wait times, projected traffic volumes, the implementation of new programs, facility expansions, and special enforcement initiatives, final decisions about resource requests and allocations are made in consultation with operational managers and program managers at the POEs and headquarters. Once final decisions on staffing needs are made by CBP headquarters, the agency allocates staffing resources to each POE. According to CBP, the directors of field operations have the ability to place CBP officers where they are needed to meet operational needs. CBP management at the six POEs we visited stated that they determine staffing needs based on workload, enforcement efforts, and other factors, including wait times, holidays, and local events. As of the end of fiscal year 2009, results of the model for the northern border land POEs showed a recommended level of staffing that was higher than the number of CBP officers on board. The model estimated that CBP needed 4,207 CBP officers while CBP had 3,927 officers on board at the end of fiscal year 2009. However, CBP reiterated that the model does not capture the complexity of land border operations, nor does it accurately determine resource requirements at the local level. For example, because the model does not take into account projected traffic volumes, it would not have accounted for the initial impacts of the economic recession. Therefore, CBP does not believe that northern border land POEs are understaffed based on the staffing model results. Moreover, CBP officials report that staffing has increased from 2,777 in fiscal year 2003 to 4,151 in fiscal year 2009 (see fig. 6 for more details). In fiscal year 2009, CBP undertook a “hiring surge,” which resulted in an additional 285 staff for northern border land POEs. 
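At its core, a workload staffing model of the kind described above converts projected workload into full-time-equivalent officers by dividing required officer-hours by the hours each officer is actually available after leave, holidays, training, and temporary duty. The following is a simplified sketch of that calculation; the factors and numbers are illustrative assumptions, not CBP's actual model:

```python
import math

def officers_needed(annual_vehicle_volume: int,
                    minutes_per_inspection: float,
                    annual_paid_hours: float = 2080.0,
                    hours_away: float = 400.0) -> int:
    """Estimate FTE officers for one POE: total inspection workload hours
    divided by the net hours each officer is available (paid hours minus
    expected time away for leave, holidays, training, and temporary duty).
    All parameter values are illustrative, not CBP's."""
    workload_hours = annual_vehicle_volume * minutes_per_inspection / 60.0
    available_hours_per_officer = annual_paid_hours - hours_away
    return math.ceil(workload_hours / available_hours_per_officer)

# Illustrative port: 1.2 million crossings a year at 2 minutes per
# primary inspection requires 40,000 officer-hours of workload.
print(officers_needed(1_200_000, 2.0))  # 24
```

A model of this shape explains why CBP supplements it with field input: factors such as wait times, projected traffic, and special enforcement initiatives do not appear in the formula at all.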
Due to CBP’s hiring effort, CBP officials report that northern border field offices received additional staff allocations. The Seattle, Washington; Detroit, Michigan; and Buffalo, New York, field offices received a majority of the new staff, as 238 of 285 positions were allocated to these three offices. Although CBP has taken actions to begin to address the effect of staffing constraints on wait times, it faces challenges in providing training to its officers. Newly hired CBP officers undergo multiple training programs consisting of pre-academy orientation, academy, and post-academy programs. Pre-academy orientation—new officers attend pre-academy orientation at their duty stations prior to attending the academy training. The orientation provides new officers with an overview of the job, including port operations and trade enforcement and facilitation. Academy—new officers are required to complete a 73-day training program at the Federal Law Enforcement Training Center in Glynco, Georgia. This training consists of classroom, laboratory, and practical exercises to ensure that the trainees are able to perform the job. Post-academy—after completing academy training, new officers are required to complete 12 to 14 weeks of post-academy on-the-job training (OJT) at their respective POEs. We reported in November 2007 that CBP faced challenges in providing the required training and lacked the data needed to assess whether new officers demonstrate proficiency in required skills. We reported that while CBP requires at least 12 weeks of OJT, new officers at the POEs visited did not receive 12 weeks of training. Moreover, we reported that when staff do not receive required training or are not trained consistent with program guidance, knowledge building is limited and the risk that needed expertise is not developed is increased. 
The lack of experience, combined with incomplete training, can contribute to delays at primary points of inspection and unnecessary referrals to secondary inspections. Moreover, it increases the risk of incomplete or faulty inspections. We recommended that CBP incorporate into its procedures for its OJT program specific tasks that CBP officers must experience during OJT and requirements for measuring officer proficiency in performing those tasks. CBP officials have begun to take actions to address these recommendations by, among other things, developing OJT proficiencies that CBP officers must demonstrate before CBP certifies that the officers’ OJT is complete. However, at five of six POEs we visited, CBP officers were not receiving the required 12 to 14 weeks of OJT. The length of training provided ranged from 3 to 10 weeks at ports we visited rather than the 12 to 14 weeks required by CBP’s post-academy training guidance. Table 1 shows the duration of training provided to new officers at the six ports we visited. For example, CBP managers at one POE we visited stated that, in general, new officers receive 3 weeks of OJT. Officers also spend 2 to 4 weeks in a mentoring program. However, as a result of the recent staffing increase and the need to train more officers, the mentoring program at this POE has been reduced from 3 to 4 months to about 2 to 4 weeks. Moreover, CBP line officers at the same POE said that 2 weeks of mentoring is not sufficient time to train new officers. CBP managers at another POE said that new officers receive about 10 weeks of OJT. CBP officers at this POE stated that due to the large number of new staff requiring training and the need to balance this demand with port operations, the new officer OJT program has been reduced from 12 to 14 weeks to 6 weeks. Also, officials at another POE told us that on average, new hires receive at least 8 weeks of OJT. 
CBP stated that trainees in all POEs are required to complete the same post-academy training program and that deviations from the prescribed post-academy training program are not authorized. However, CBP training officials stated that depending on staffing levels, field offices may fast-track training to get new officers on the line, balancing the need to provide training with facilitating the flow of commerce. Although CBP officials at the six POEs we visited told us that staffing was adequate, CBP managers at four of six POEs said that it was a challenge to balance training needs with operational demands. For example, CBP managers at two POEs told us that they limit the number of officers sent off-site for training during peak seasons because it affects staffing levels and port operations. According to CBP managers at one POE we visited, training new officers is expensive because the agency needs extra staff during each shift in which training occurs. They told us that the agency does not have the capability to properly train the surge of new officers brought onboard due to recent hiring efforts because there is a shortage of experienced staff available to train new hires at the POEs. As a result, new officers are often trained by less experienced officers than before. Officers also told us that, in some instances, new officers are assigned to their duty stations without completing the required field training. For example, at one location, CBP line officers told us that although new officers receive a training checklist that supervisors are supposed to certify, supervisors typically do not certify that the training checklist has been completed before new officers are assigned to duty stations. Internal control standards related to human capital management state that management should ensure that the organization has a workforce with the skills necessary to achieve organizational goals.
According to CBP officials responsible for training, staffing and meeting operational demands are the greatest challenges in training new hires. CBP officials in headquarters responsible for planning training stated that when ports undergo a hiring surge, it can be difficult for them to train the new officers. CBP officials also noted that ports need to staff extra officers to cover for field trainers and officers receiving training. For example, field trainers are officers taken off the line to train new hires. Additionally, CBP officials said that it is difficult to provide training during peak seasons when traffic volumes are high, and that field training may be limited due to capacity issues or the availability of space at the POEs. CBP officials said they recognize that training is a challenge at POEs, and launched an enhanced tracking system in April 2010 to monitor the various stages of training, including pre-academy, basic academy, and post-academy training. According to CBP officials, with the system enhancement, they will be able to track delivery of training and work with field offices that are not meeting identified training needs. Further, CBP training officials told us that they plan to address the need for more experienced field trainers by developing a certification program, which is being developed in two stages. The first stage, related to pre-academy training, was piloted in April 2010. The second stage, related to post-academy training, will be piloted and completed in September 2010. In addition, CBP reported that in May 2009 the agency designed and began implementing a new training approach known as the Federal Career Internship Program for CBP Officers. According to CBP, the newly piloted program consists of a 3-week pre-academy program, an 85-day basic training program, and post-academy training.
CBP officials explained that depending on the new hire’s POE assignment, the new post-academy program may consist of specific training in land operations, air and sea operations, or cargo operations. Additionally, CBP officials stated that CBP will use its enhanced tracking system to track all phases of the new training curriculum locally, in the field offices, and at headquarters. Further, CBP officials believe that the new post-academy curriculum and enhanced tracking system will help to eliminate variance among ports of the same environment in the way post-academy training is conducted. The pilots of the new curriculum are planned to be implemented in 2010, and the final launch is planned for fiscal year 2011.

CBP’s process for identifying and prioritizing capital infrastructure needs at land POEs consists of several steps, including gathering data using the SRA process, ranking the facilities by identified needs, conducting an analysis of the initial ranking of needs, assessing project feasibility and risk, and establishing a capital investment plan. During the SRA, CBP evaluates the facility against more than 60 criteria to identify deficiencies that affect the following categories: mission and operations, security and life safety, space and site deficiency, and personnel and workload growth. CBP conducted an SRA of every land POE along both the northern and southern borders from fiscal years 2003 through 2006. CBP has concluded that most of the inspection facilities are outdated and were designed to accomplish legacy missions. On the basis of the assessments, CBP estimates that it will need over $6 billion during the next 10 years to modernize the land POE inventory to meet the operational requirements in a post-9/11 environment and the workload demands of the 21st century. CBP began another round of SRAs in fiscal year 2008, and completion is scheduled for fiscal year 2011.
CBP and GSA have plans to make infrastructure improvements at a number of land POEs along the northern border designed to ease congestion, improve inspection capacity, and increase throughput. Over the next 5 years, CBP will have infrastructure projects related to 35 different northern border land POEs in various stages of development. Five of the six ports we visited have infrastructure improvement projects scheduled or pending approval. For example, CBP and other stakeholders initiated a project to expand and modernize the Blue Water Bridge plaza in Port Huron, Michigan, to alleviate congestion, eliminate bottlenecks, and enhance security. The project involves a complete redesign and construction of the bridge plaza, including all facilities utilized by CBP, the bridge owner, the Michigan Department of Transportation (MDOT), and other federal agencies. The Environmental Impact Study was approved in March 2009, and CBP expects construction to begin in early 2016, with completion projected for 2019. CBP estimates that the project will cost over $500 million. After the expansion, the facility is planned to increase from 12 to 56 acres, and the number of primary lanes is expected to increase from 13 to 24, which CBP officials said will result in increased throughput and reduced congestion. According to CBP, 15 of the 24 lanes will be equipped with high-low booths to process passenger (“low”) and commercial (“high”) traffic, and 9 lanes will be dedicated to passenger vehicles to meet CBP’s operational requirements. In another example, infrastructure improvements are also planned for the Lewiston-Queenston crossing in Buffalo, New York. According to the May 2008 Lewiston-Queenston Facility feasibility study, the primary inspection lanes are inadequate to handle passenger and commercial vehicle traffic, and improvements are needed (see fig. 7 for an aerial view of the Lewiston-Queenston Bridge Facility).
The study further concluded that there are too few commercial inspection docks at Lewiston, and that the docks are undersized. At present, there are four commercial inspection docks, and CBP plans to construct eight additional docks during the renovation. CBP and the Niagara Falls Bridge Commission estimate that the Lewiston-Queenston renovation will cost about $117 million. According to CBP officials, CBP is planning to expand the Lewiston-Queenston Bridge Facility, but the design and construction remain unfunded. Once funding is available, CBP expects design to be completed within 12 to 18 months and construction within 24 months. In the interim, the Lewiston-Queenston Bridge Facility is scheduled to receive $1 million in fiscal years 2010 and 2011 to renovate the administration building, build a new secondary processing area, and make other improvements. In addition, CBP has installed high-low booths, dual-height inspection booths that allow a single lane to process either passenger or commercial vehicles depending on the specific demand and maximize available space (see fig. 8 for an example of high-low booths). Nine of 13 lanes at Port Huron, Michigan, were modified to equip them with high-low booths, and the Lewiston-Queenston Bridge Facility was remodeled to include high-low booths for either cars or trucks, where lanes change as needed based on traffic composition. Additionally, the Niagara Falls Bridge Commission increased the capacity of the bridge from four lanes to five lanes. As a result, there are now three U.S.-bound lanes—one for FAST, one for commercial vehicles, and one for personal cars (see figs. 9 and 10). Moreover, CBP increased the number of primary lanes at the Ambassador Bridge Fort Street Cargo Facility and the Ambassador Bridge Plaza, which according to CBP has helped to ease traffic congestion and reduce delays. For example, in June 2008, the Ambassador Bridge Plaza was expanded from 12 to 19 primary lanes. According to CBP and the Ambassador Bridge Authority, the expansion helped to improve traffic flow and reduce congestion on the bridge.
Further, in 2004, CBP increased the commercial processing capacity of the Ambassador Bridge Fort Street Cargo Facility by adding seven primary processing booths. Despite these incremental infrastructure changes, however, CBP officials at the six ports we visited told us that additional processing capacity is needed to accommodate projected traffic flows. As discussed earlier in the report, five of the six ports we visited have infrastructure improvement projects planned or pending approval. CBP has also deployed automated license plate and document readers as well as other technology at the six POEs we visited, which CBP officials said have helped to facilitate vehicle processing. License plate readers automatically read the front and rear license plates of vehicles as they enter the primary inspection area, with the data simultaneously queried against CBP and law enforcement databases. CBP has installed technology that can read documents enabled with Radio Frequency Identification Device (RFID) technology, which according to CBP speeds up processing. A driver who has a FAST card for expedited processing holds up the RFID-enabled card before entering the booth. As a result, the driver’s information is automatically displayed on the screen before the driver approaches the primary inspection booth. In addition, CBP officials said that the use of nonintrusive technologies, such as the Vehicle and Cargo Inspection System (VACIS) and radiation portal monitors (RPM), has allowed CBP to inspect more shipments efficiently and has reduced the number of physical inspections, which can be costly and time consuming. These technologies allow CBP to inspect cargo without having to perform a time-intensive manual search or other intrusive examinations of the contents. For example, CBP officials at the Peace Bridge told us that they scan over 100 commercial shipments a day using VACIS; prior to deploying VACIS, CBP officials said, they unloaded and inspected only 12 commercial vehicles a day (fig. 11 shows a picture of a mobile VACIS).
In general, CBP can use VACIS to avoid unloading the contents of a truck, but at certain times a CBP officer may determine that a physical search is necessary. Prior to 2006, when the current version of the Automated Commercial Environment (ACE) was deployed, CBP did not receive advance e-Manifest data on trucks crossing at land POEs. As a result, decisions on whether to release, examine, or detain a shipment had to be made at the primary inspection booth. With the deployment of new technologies such as ACE, CBP officials told us that officers spend less time manually inputting information, thereby reducing inspection times and improving the accuracy of the collected information. All of CBP’s land border POEs are capable of receiving and processing e-Manifests as part of ACE. Moreover, according to CBP officials, more shipments are released at the primary inspection booth as a result of ACE and the advance information provided via e-Manifest. Despite the incremental infrastructure improvements discussed earlier, CBP officials told us that limited space and equipment continue to affect CBP’s inspection of commercial vehicles and operations at the six ports we visited. The Peace Bridge site is one of the busiest commercial crossings between the United States and Canada, yet existing border infrastructure at the Peace Bridge contributes to a number of crossing inefficiencies, according to CBP. The facility, which is considered a large port, is located on 17 acres of land, as opposed to the 80 acres that CBP recommends for a large POE (see fig. 14). According to CBP, the port does not have the space to handle the number of vehicles referred for secondary inspections. The plaza is spatially constrained and lacks the space needed for the enclosed VACIS equipment to screen cargo vehicles in secondary inspections. As a result, officers can screen only one commercial vehicle at a time.
CBP officials told us that if the secondary inspection area is full, CBP officers must hold vehicles referred for secondary inspection in the primary lane, causing congestion and delays. In addition, we observed that because of the configuration of the port, vehicles referred to secondary inspections must cross paths with commercial vehicles exiting the primary inspection area, which contributes to border crossing inefficiencies and creates an obstructive intersection as well as safety and security risks. CBP and GSA are planning to expand and modernize the Peace Bridge Facility, but they have not yet requested funding for the facility due to federal budgetary scorekeeping rules governing leases. However, once funding is available, CBP and GSA expect the design to be completed within 12 to 18 months and construction within 24 to 36 months. As another example, the Lewiston-Queenston POE was constructed in the early 1960s and, with the exception of a few modifications (such as the increase in lanes from four to five), has remained unchanged, although security measures and traffic volume have increased over time. CBP has concluded that the main building and commercial building are too small to handle current operations and can no longer accommodate either the traffic or the complexity of processing operations required since 9/11. Specifically, CBP has concluded that there are too few primary inspection lanes to process car and truck traffic, the commercial inspection docks are undersized, and the secondary processing facilities are dated. For example, CBP noted that the work space is insufficient to accommodate existing staff and operations. In addition, the work areas are small and overcrowded, and there is no room for additional staff or functions. CBP and GSA are planning to expand the Lewiston-Queenston Bridge Facility, but they have not yet requested funding for the facility due to federal budgetary scorekeeping rules governing leases.
However, once funding is available, CBP and GSA expect design to be completed within 12 to 18 months and construction within 24 months. The Pacific Highway facility in Blaine, Washington, is one of the largest POEs for cargo processing on the northern border and has three commercial inspection lanes. CBP managers stated that the Pacific Highway crossing needs more lanes to increase throughput, but the facility lacks the space needed to expand. According to CBP, there is limited room to expand without acquiring additional property. In addition to the limited lanes, there is minimal staging area for trucks waiting for secondary inspections, and the placement of the VACIS causes backups. CBP officials told us that three trucks can queue at once for VACIS scans. When more than three trucks are referred to VACIS, CBP does not have space available on the plaza to queue additional vehicles, and traffic blocks the primary lanes. Officials said this happens on a daily basis. When it does, CBP officers told us, the primary officer has to decide whether to refer the shipment to secondary inspection, causing the lanes to shut down, or to keep traffic moving, facilitating the flow of commerce. According to CBP officials in Blaine, Washington, as the economy improves, infrastructure constraints will exacerbate delays at the port. According to the Port Director at Port Huron, the lack of adequate physical space and infrastructure adversely affects port operations. CBP has concluded that the site size is inadequate to support operations. Specifically, officials stated that the facility is too small, with limited parking and space to off-load trucks, forcing officers to escort trucks elsewhere to be searched. CBP officials stated that they have to dedicate two staff to escort shipments to an off-site location for unloading and inspection, which according to CBP is a security risk and takes staffing resources away from other critical port functions.
Further, CBP officials explained that after the construction of the new plaza and cargo inspection facility, CBP will be able to inspect cargo on-site and will save the resources devoted to escorting trucks to off-site facilities. CBP officials stated, and we observed, that the facility’s 22 inspection docks are too small to meet the inspection needs of the POE. CBP officers told us that the contents of a single truck can take up the entire length of all the docks. We observed that Port Huron’s 12-acre elevated inspection area, which sits 26 feet above ground, serves as the on- and off-ramps for the Blue Water Bridge from Interstates 69 and 94. The port is surrounded by commercial and residential developments, thus limiting CBP’s ability to expand the plaza or add more lanes. CBP and MDOT have initiated plans to renovate Port Huron to alleviate congestion, reduce wait times, eliminate bottlenecks, and improve inspection capacity. CBP expects construction to begin in 2016, with completion projected for 2019. Moreover, CBP officials told us that although CBP recently made some infrastructure improvements at the Ambassador Bridge Fort Street Cargo facility, challenges remain. For example, due to limited physical space, we observed that the placement of the VACIS causes backups in secondary inspections that slow throughput, and the secondary RPM is placed directly in front of the VACIS machine. Because of the location of the VACIS machine, all vehicles form one queue for screening. As a result, a shipment referred to secondary inspection for advanced RPM screening may be delayed if the VACIS machine is being used. CBP officials also told us that a wall surrounding the Ambassador Bridge Fort Street inspection plaza and the placement of one of the primary inspection booths (“lane 10”) limit access to the dedicated FAST booths, as shown in figure 12.
As a result, FAST trucks have to form a single queue and curve around both the wall and lane 10 to access the four dedicated FAST booths. CBP officials told us that they plan to improve access to the FAST lanes and increase throughput by removing the wall to expand the available queuing space. Construction is expected to commence in September 2010, and completion is scheduled for November 2010. Although CBP has a process for prioritizing infrastructure needs, it faces challenges in addressing identified issues, according to CBP officials responsible for infrastructure improvements. CBP works with GSA to coordinate infrastructure projects with other stakeholders, such as private bridge authorities and state departments of transportation. The process for completing capital improvement projects, such as building new lanes or secondary inspection facilities, is lengthy. According to CBP and GSA officials, the process, from the submission of a request for an infrastructure improvement to the completion of the project, takes approximately 7 years. For example, CBP officials told us that the Peace Bridge improvement project that occurred in 2005 took at least 5 years from start to completion. Prior to every construction project, GSA conducts a feasibility study, which defines the project’s scope, including the budget, the amount of land required, the basic design, the environmental challenges, and the community impact. GSA officials told us that they use the results of the feasibility studies to justify the funding requests submitted to the Office of Management and Budget (OMB). See figure 13 for GSA’s land POE capital program delivery process. Furthermore, CBP and GSA officials said that land constraints affect their ability to make infrastructure improvements. For example, CBP officials said that they have been discussing plans to expand the Peace Bridge Facility for the past 10 years.
Although CBP recognizes that increasing the size of the Peace Bridge Inspection Facility is necessary to address capacity issues, there is limited room adjacent to the facility for expansion without affecting the surrounding community. The facility sits on 17 acres and is confined on three sides by the Niagara River, a historic park, and a residential neighborhood. See figure 14 for an overhead view of the Peace Bridge Facility. Further, the Port Huron Facility is scheduled for renovation starting in fiscal year 2016 with completion in 2019. Due to the lack of space for expansion, CBP officials told us that MDOT used eminent domain law—the government’s power to take private property for a public use while fairly compensating the property owner—to purchase nearby homes and businesses to acquire land for the plaza expansion project. According to GSA officials, securing funding for infrastructure projects is also dependent on the annual budget cycle. On average, it takes about 18 months to obtain funding for large projects after GSA submits its proposal to OMB for approval. GSA officials also told us that they may not get the full amount of funds requested for infrastructure projects due to competing priorities, which affects their ability to make infrastructure changes, such as resizing the roads leading to the POEs. Table 2 provides information on GSA funding requests and appropriations for the POE capital investment and leasing program for fiscal years 2003 through 2010. Additionally, CBP and GSA officials said that they have to coordinate with multiple stakeholders, including city and state governments, to address infrastructure needs because the bridges and roads leading to the POEs are owned by private entities or state governments. GSA officials noted that coordinating with multiple stakeholders to address infrastructure issues can be time consuming. 
Although CBP established the FAST program to expedite cargo processing for low-risk shippers and uses the program as a tool to help focus its inspections on areas of greatest risk, it lacks the data needed to determine whether the FAST program is effective because it collects incomplete data on FAST shipments. Moreover, the data CBP collects on primary and secondary inspections for a subset of the FAST population do not allow it to determine whether all FAST participants experience reduced wait times to reach primary processing, are less frequently referred to secondary inspections, or receive “front-of-the-line” benefits. The FAST program is intended to provide, among other things, (1) access to dedicated lanes (where available) to increase the speed and efficiency of clearing the border, (2) fewer referrals to secondary inspections for FAST participants, and (3) front-of-the-line processing (i.e., priority in the inspection queue) for CBP inspections. Additional details on the data limitations for assessing each of these intended benefits are discussed below. Seven of 122 northern border POEs had dedicated FAST lanes; these seven POEs accounted for approximately 54 percent of the volume of commercial traffic along the northern border in 2009. See figures 15 and 16 for examples of a dedicated FAST lane at the Pacific Highway crossing in Blaine, Washington, and in Port Huron, Michigan, respectively. However, CBP is unable to monitor wait times for FAST shipments using dedicated lanes to determine if the shipments are experiencing reduced wait times in reaching primary processing because of data limitations and other factors. CBP reported that wait times for FAST lanes at individual ports were shorter than those for non-FAST lanes.
However, because dedicated FAST lanes are sometimes used for regular commercial traffic during periods of heavy volume, the data collected at the individual POE level on FAST dedicated lane wait times are less useful for comparison. For example, at the Pacific Highway crossing in Blaine, Washington, CBP officials said that when wait times exceed 1 hour, they open the FAST lane to all commercial traffic. Similarly, at the Port of Detroit, CBP has the ability to adjust the FAST lanes and open them to non-FAST traffic on a temporary basis. Moreover, CBP officials stated that if the FAST lane is empty, the Port Director has discretion in determining whether to allow non-FAST shipments to use the lane (e.g., livestock shipments or FAST drivers not carrying a FAST load). The data CBP collects that could be used to determine whether FAST participants experience reduced wait times at primary processing and fewer referrals to secondary inspections are limited because CBP does not differentiate between all FAST and non-FAST shipments. DHS noted that dedicated FAST lanes enable greater processing efficiency, thereby reducing queue lengths and wait times. DHS stated that lanes dedicated to FAST have average primary processing times of 30 seconds versus 2 minutes for non-FAST lanes. However, as explained below, these averages account for approximately 38 percent of FAST participants. The ACE system, through which CBP collects data on commercial shipments, captures data on the NCAP and PAPS manifest types. The NCAP manifest is available for select FAST shipments, mostly related to the auto industry. A majority of FAST shipments are processed under the PAPS manifest type. However, the PAPS manifest is not confined to the FAST program, so shipments processed using the PAPS manifest include both FAST and non-FAST shipments.
If a FAST shipment is processed using PAPS, the ACE system uses information submitted on the electronic manifest to determine whether the shipment meets all the conditions of the FAST program (e.g., the driver has a FAST card and the carrier and importer are C-TPAT certified). If these conditions are met and the shipment is eligible for FAST, ACE displays a green flag to the officer processing the shipment in the primary lane. Although the purpose of this process is to speed processing for shipments, ACE does not save this information so it cannot be used to assess processing times for all FAST versus non-FAST shipments. The ACE system also uses information in the manifests to help determine the need for secondary screening, but for the same reasons discussed above, the system does not collect information on the number of secondary screenings for all FAST versus non-FAST shipments. Consequently, CBP is unable to determine whether the program provides all participants with the intended benefits of reduced primary processing times and fewer referrals to secondary inspections. CBP acknowledged that the ACE system needs to be modified so that it can monitor and record FAST primary processing times and the number of referrals to secondary inspections more effectively. CBP began to consider enhancing ACE to better differentiate between FAST and non-FAST shipments in 2008 and estimates that the software changes would cost about $122,000. However, senior CBP officials said that the project remains unfunded due to other priorities. While we recognize that CBP has competing priorities and that assessing a program’s impact or benefit is often difficult, determining whether a program achieves its intended results can provide important information about the program’s progress and be used as a basis for determining whether adjustments are needed to ensure its long-term success. 
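The eligibility determination described above, in which a PAPS-manifested shipment is flagged green only when every FAST condition is met, can be sketched in a few lines of code. The following is a hypothetical illustration only; the field names and structure are invented for the example and do not reflect CBP’s actual ACE implementation.

```python
# Hypothetical sketch of the FAST eligibility check described above.
# Field names and structure are invented for illustration; this is not
# CBP's actual ACE implementation.

def is_fast_eligible(manifest: dict) -> bool:
    """A PAPS-manifested shipment qualifies as FAST only if the driver
    holds a FAST card and both the carrier and the importer are
    C-TPAT certified."""
    return (
        manifest.get("driver_has_fast_card", False)
        and manifest.get("carrier_ctpat_certified", False)
        and manifest.get("importer_ctpat_certified", False)
    )

# ACE would display a green flag to the primary-lane officer only when
# all conditions are met; otherwise the shipment is processed as non-FAST.
shipment = {
    "driver_has_fast_card": True,
    "carrier_ctpat_certified": True,
    "importer_ctpat_certified": True,
}
print("green flag" if is_fast_eligible(shipment) else "non-FAST")
```

Because, as noted above, ACE does not save the result of this check, processing-time and referral data cannot later be separated by FAST status, which is the gap the proposed ACE enhancement is intended to close.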
Further, a senior CBP official, the Chief of Cargo Operations, stated that CBP has not yet established timelines or milestones for completing the ACE enhancement project to capture data for all FAST participants because officials have not identified a source of funding. Standard practices for project management established by the Project Management Institute state that managing a project involves, among other things, developing a timeline with milestone dates to identify points throughout the project at which to reassess efforts under way and determine whether project changes are necessary. Establishing timelines or milestones for completing the enhancement to ACE could help ensure that CBP’s actions are implemented as planned so that it is better positioned to begin collecting the data necessary to determine whether FAST shipments are receiving the intended benefits of the program—shorter primary processing times and fewer referrals to secondary inspections. Additionally, although CBP stated that once ACE is modified to collect data on all FAST participants, the data may be useful for measuring program benefits, CBP has no plans to conduct a study on whether the benefits are being realized. Our previous work identified program evaluations or similar studies as a way for agencies to explore the benefits of a program as well as ways to improve program performance. Therefore, using this information to conduct a study would enable CBP to determine whether the benefits are experienced by all FAST participants and what program adjustments, if any, may be needed. Moreover, CBP does not collect data on whether FAST shipments that are sent to secondary inspections receive priority in the inspection queue, known as front-of-the-line benefits. CBP officials in headquarters said the ACE enhancement project will not allow CBP to capture data on front-of-the-line benefits, and there are no current plans to capture these data.
CBP officials stated that front-of-the-line benefits may vary based on the infrastructure at the POE, traffic volume, and the type of exam needed (e.g., a paperwork issue or a physical inspection/unloading). Moreover, according to CBP officials, space constraints in secondary inspection areas limit their ability to provide front-of-the-line benefits to all FAST participants. For example, CBP officials at the Pacific Highway crossing told us that due to space constraints on the plaza, they cannot move FAST shipments to the front of the line for VACIS screenings. However, in some instances, FAST members receive priority processing for paperwork issues, but they have to wait in line for other types of inspections, such as physical inspections or VACIS screening, due to infrastructure issues. CBP is working to address its infrastructure constraints, but until those efforts yield results, CBP will not be able to develop a standard method for collecting data on front-of-the-line benefits because of the variations in infrastructure across POEs. Collecting data on the FAST program could better position CBP to gauge program effectiveness and determine what program adjustments, if any, are needed. Standards for Internal Control in the Federal Government requires that all transactions be clearly documented in a manner that is complete, accurate, and useful for managers and others involved in evaluating operations. Moreover, internal control standards call for agencies to establish policies and procedures to ensure the validity and reliability of data. We previously reported that leading organizations promote accountability by establishing results-oriented goals and corresponding performance measures by which to gauge progress.
Having better information with which to assess program effectiveness would help CBP in making management decisions on the program and would enable CBP management to report to participants and potential future participants whether the benefits of the program are being realized. This information would help participants determine whether to join or remain in the program. CBP and 8 of 11 importers and trade organizations that we interviewed have expressed generally favorable views of the program, but stated that infrastructure challenges may limit the benefits received. According to CBP officials, the FAST program helps the agency meet its goal of securing borders while promoting legitimate trade by pre-vetting drivers and securing the supply chain, which allow CBP to focus its resources on high- risk shipments. For example, CBP officials in Port Huron, Michigan, told us that the FAST program is beneficial to CBP because it facilitates the processing of low-risk shipments, and improves the flow of traffic by reducing congestion on the Blue Water Bridge. CBP officials at the Port of Detroit and Port Huron also noted that FAST participants benefit from the FAST program with faster primary processing and front-of-the-line benefits. Moreover, officials we spoke to representing “The Big 3” automakers—Ford, GM, and Chrysler—are generally satisfied with the FAST program, and noted that FAST is a vital program that decreases border delays while ensuring a more secure supply chain. For example, these officials stated that they receive fewer referrals to secondary inspections, and told us that when their shipments are referred to secondary inspection they generally receive priority processing. Additionally, five trade organizations, such as the Detroit Regional Chamber of Commerce, American Trucking Alliance, and customs brokers, noted that certain intended benefits are met, including fewer inspections. However, these groups raised concerns about infrastructure constraints. 
CBP officials said that FAST program benefits may be limited due to infrastructure constraints at land POEs. As previously discussed, only 7 of 122 northern border POEs have dedicated FAST lanes. Further, officials from 7 of 10 trade organizations and importers, such as the Canadian Trucking Alliance, the U.S. Chamber of Commerce, the Detroit Regional Chamber, and Customs Brokers and Forwarders, as well as officials from 7 of 13 border stakeholders we spoke with, such as bridge commissions, stated that CBP lacks the infrastructure needed to successfully implement the FAST program. For example, American Trucking Association officials told us that a challenge trucking companies continue to face is the lack of dedicated lanes leading to the POE so that FAST traffic is not comingled with non-FAST traffic. As a result, FAST shipments do not receive priority treatment due to non-FAST and FAST shipments comingling on the bridge as well as in the plaza and infrastructure constraints at the POEs. CBP officials acknowledge that due to infrastructure constraints they are unable to provide dedicated FAST lanes at certain locations, such as the Peace Bridge and Lewiston facilities in Western New York. These constraints also make access to existing FAST booths difficult. As previously discussed, access to the dedicated FAST lanes at the Ambassador Bridge Fort Street Cargo Facility is limited due to the placement of lane 10 and a wall surrounding the inspection plaza, as shown in figure 17. Due to these infrastructure constraints, FAST trucks have to form a single queue to access the four dedicated FAST booths, resulting in reduced throughput and increased delays. Further, the Pacific Highway crossing in Blaine, Washington, has three commercial lanes with one dedicated FAST lane and limited space for expansion due to residential development and the international border. 
Although the Pacific Highway crossing has a dedicated FAST lane, CBP officials told us that when wait times exceed 1 hour, they open the FAST lane to all commercial traffic to mitigate congestion. As a result, FAST trucks are comingled with non-FAST traffic. CBP officials also stated that due to limited space for queuing in the secondary inspection area, they are unable to provide FAST shipments with priority processing for VACIS screening. Additionally, 10 of the 23 importers, trade organizations, and border stakeholders we interviewed voiced concerns about the FAST program. These concerns included the costs of enrollment as well as FAST program policy issues such as shipment restrictions and the appeals process for security incidents. Officials from the American Trucking Alliance and five other trade organizations, such as the Canadian Trucking Alliance and National Customs Brokers and Forwarders Association, stated that smaller and medium-sized companies may not be able to afford the cost associated with C-TPAT compliance. While the enrollment cost of the FAST program is $50 for the driver card, FAST participants are also required to be certified under C-TPAT. According to CBP, the potential cost of implementing security measures to comply with C-TPAT varies by the size of the company as well as the types of security measures implemented. CBP officials stated that the cost for a company to become C-TPAT certified will vary because the cost of securing the supply chain varies depending on the size of the company or security measures needed. Six importers and trade organizations raised concerns about the restrictions on carriers that are transporting goods from multiple shippers that, in total, are less than the size of one truckload (called less-than-truckload shipments or LTL). CBP officials explained that LTL shipments are allowed to use the FAST lane provided each of the shippers is a C-TPAT-certified member and all other FAST requirements are met. 
CBP stated that this policy ensures that LTL shipments using the FAST lane have completed a strict security review by participating member companies. Further, according to CBP, it needs to maintain a balance between facilitating trade and security. Therefore, CBP restricts LTL shipments from using the FAST lane if all of the shippers are not C-TPAT-certified members because the entire shipment is not pre-vetted and deemed low risk. Four importers and trade organizations noted that CBP immediately suspends a member’s FAST privileges if a driver is involved with a security incident until the results of the investigation are final. CBP officials stated that the agency immediately revokes all program privileges following the security violation rather than after the investigation and imposes program restrictions to secure the supply chain and maintain the integrity of the program. According to CBP, on average, it takes about 15 days to conduct the post incident analysis in coordination with other law enforcement agencies to determine where the breakdown in the supply chain occurred. CBP officials said that if a member is suspended after the investigation, the member may appeal this decision to CBP headquarters. According to CBP, in general, members are provided with the opportunity to prepare a corrective action plan, which is subject to physical confirmation that all identified vulnerabilities have been addressed. For example, in 2009, CBP suspended or removed 82 members, 57 of which were reinstated. However, CBP officials explained that in some instances, a company may be permanently removed from the program for providing false information or for repetitive security violations. Further, CBP officials emphasized that members are informed of the appeals and suspension process, and the information is provided on CBP’s Web site. 
Canada is the United States’ largest single trading partner, and economists expect trade between the two countries to increase as the economy improves. As such, achieving an effective balance between facilitating legitimate trade and travel and performing the inspections needed to secure the U.S. border is critical to the security and economy of the United States. Further, CBP has taken steps to address some of the infrastructure needs of its aging northern border POEs and recognizes the continued need for improvements to speed the flow of traffic. These improvements are particularly important in light of projections regarding the increase in trade between Canada and the United States. Cooperative U.S.-Canada efforts to increase the flow of legitimate trade and travel and improve border security, such as the FAST program, are promising, and CBP and participants we interviewed generally believe the program is helpful where infrastructure is sufficient. While CBP is taking actions to collect data on the FAST program in the ACE database, CBP has not established milestones to complete the enhancement for FAST data collection. Establishing milestones for completing the enhancement to ACE could help ensure that CBP’s actions are implemented as planned so that it is better positioned to begin collecting the data necessary to determine whether FAST shipments are receiving the intended benefits of the program—shorter primary processing times and fewer referrals to secondary inspections. Moreover, once CBP completes the enhancement to the ACE database, using this information to conduct a study would enable CBP to determine if the benefits are experienced by all FAST participants and what program adjustments, if any, are needed. 
To enhance DHS’s ability to assess the effectiveness of the FAST program, we recommend that the Commissioner of CBP take the following two actions: Develop and meet milestones for completing the enhancement of the ACE database to capture data on the intended benefits of the FAST program. Once the database is modified, use the data collected to conduct a study to determine whether the FAST program is achieving its intended benefits. We provided a draft of this report to DHS, Commerce, DOT, GSA, and HHS for review and comment. DHS provided written comments on July 9, 2010, which are reprinted in appendix I. In commenting on the draft report, DHS stated that it agreed with the two recommendations in this draft and identified corrective actions it has planned or under way to address them. DHS’s comments also raised three issues regarding our findings. First, DHS stated that its current approach to measuring wait times shows that those drivers using FAST lanes experience shorter wait times. While average wait times for FAST lanes may be shorter than average wait times for regular commercial lanes, as indicated in the report, we found that the wait times reported for FAST lanes do not necessarily reflect participants’ wait times as dedicated lanes may be used by FAST and non-FAST participants. Moreover, we reported that because CBP’s wait times are estimated using approximations of varying reliability at selected POEs, the data cannot be used for analyses across ports, and the methods of collection raise questions about the reliability of the overall data. Second, DHS commented that the discrepancies in wait times reported between CBP, trade organizations, and importers may be attributed to the different measures and definitions used to estimate wait times. We acknowledge there could be a variety of reasons for the discrepancies in wait times reported by CBP, trade organizations, and importers. 
However, CBP’s observation further supports our conclusion that using a consistent methodology, standard formula, and automation could increase the accuracy and reliability of the wait times data collected. Third, DHS stated that CBP primary officers at the primary inspection point can only add the driver and trailer information to a FAST/NCAP manifest, and not the quantity of shipment. We revised the draft report to reflect this information. We received written comments from Commerce on July 2, 2010, in which it concurred with our report. These comments are reprinted in appendix II. DHS and DOT also provided technical comments, which we incorporated in the report as appropriate. In addition, we received e-mails from the GSA liaison on June 2, 2010, and the HHS liaison on June 29, 2010, in which they notified us that they had no comments on the draft report. We are sending copies of this report to the Secretaries of Commerce, Health and Human Services, Homeland Security, and Transportation; the Administrator of GSA; and interested congressional committees as appropriate. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In addition to the contact named above, Susan Quinlan, Assistant Director, and Minty M. Abraham, Analyst-in-Charge, managed this assignment. David P. Alexander, Avrum I. Ashery, Chuck Bausell, Frances Cook, Peter DelToro, Lara Kaskie, Alana R. Miller, Madhav S. Panwar, and Mark Ramage made significant contributions to the report.
The United States and Canada share a border of nearly 5,525 miles. U.S. Customs and Border Protection (CBP), within the Department of Homeland Security (DHS), is responsible for securing the borders while facilitating trade and travel. CBP launched the Free and Secure Trade (FAST) program in 2002 to expedite processing for pre-vetted, low-risk shipments. GAO was requested to assess U.S.-Canadian border delays. This report addresses the following for U.S. northern border land ports of entry: (1) the extent to which wait times data are reliable and reported trends in wait times, (2) any actions CBP has taken to reduce wait times and any challenges that remain, and (3) the extent to which CBP and FAST participants experience the benefits of the FAST program. GAO analyzed CBP information and data on staffing, infrastructure, wait times, training, and the FAST program from 2003 through 2009 to assess operations. GAO visited six northern border land ports, which were primarily selected based on commercial traffic volume. GAO interviewed importers, trade organizations, and border stakeholders. The results are not generalizable, but provide insights. CBP does not collect data that would allow it to assess the effect of staffing and infrastructure constraints on wait times, but CBP officials and stakeholders report that wait times have decreased. CBP calculates and reports wait times hourly for 28 of 122 northern border land ports. However, CBP officials and the 13 border stakeholders, importers, and trade organizations GAO interviewed about wait times questioned the accuracy and reliability of CBP's wait times data. For example, CBP officers at three crossings questioned the methods used to estimate wait times, such as driver surveys, which are subjective. 
According to CBP and all stakeholders GAO interviewed, wait times for commercial vehicles have generally decreased due to lower traffic volumes as a result of the recession as well as staffing and infrastructure improvements, among other things. CBP initiated a pilot project in 2009 to automate wait times measurement and improve the accuracy of the data, and plans to deploy initial technology in the summer of 2010. To reduce wait times, CBP has taken actions to address staffing constraints and make infrastructure improvements, but challenges remain. CBP has increased northern border staffing levels by 47 percent from fiscal years 2003 through 2010, and thus is better able to staff all available lanes. GAO found that CBP officers receive 3 to 14 weeks of on-the-job training rather than the required 12 to 14 weeks. CBP launched an enhanced tracking system in April 2010 to monitor training, which officials said will enable them to work with field offices that are not providing required training. CBP has a process for identifying and prioritizing capital infrastructure needs at land ports and has infrastructure projects related to 35 of the 122 northern border ports under way or planned over the next 5 years, in part, to help reduce wait times. CBP has made infrastructure improvements at 5 of the 6 land ports GAO visited. CBP officials said they face challenges addressing infrastructure needs, such as expanding infrastructure at the Peace Bridge, which is confined on three sides by the Niagara River, a historic park, and a residential neighborhood. CBP lacks data needed to assess whether FAST program participants receive program benefits, but depending on the infrastructure available, CBP and 8 of 11 stakeholders GAO interviewed had generally favorable views of the program. CBP's Automated Commercial Environment (ACE) collects data on freight processing but does not differentiate between FAST and non-FAST shipments. 
Thus, it is difficult for CBP to determine the extent to which participants experience intended benefits. CBP officials stated that the ACE system needs to be modified to capture these data, but CBP has not yet set milestones to do so. Establishing milestones could help CBP ensure that modifications to ACE proceed as planned so that CBP is better positioned to begin collecting data. However, CBP does not have plans to conduct a study to determine if program benefits are being realized once these data have been captured. Conducting such a study would help CBP determine if the benefits are experienced by all FAST participants, and what program adjustments, if any, are needed. GAO recommends that CBP (1) develop milestones for completing the enhancement of the database to capture data on FAST program benefits and (2) conduct a study to determine if program benefits are being realized. DHS concurred.
U.S. insular areas receive hundreds of millions of dollars in federal grants from a variety of federal agencies, including the Departments of Agriculture, Education, Health and Human Services, Homeland Security, the Interior, Labor, and Transportation. The Secretary of the Interior has administrative responsibility over the insular areas for all matters that do not fall within the program responsibility of another federal department or agency. OIA, established in 1995, is responsible for carrying out the Secretary’s responsibilities for U.S. insular areas. OIA’s mission is to promote the self-sufficiency of the insular areas by providing financial and technical assistance, encouraging private sector economic development, promoting sound financial management practices in the insular governments, and increasing federal responsiveness to the unique needs of the island communities. Much of the assistance that OIA administers to insular areas is in the form of what it considers mandatory assistance, including compact assistance, permanent payments to U.S. territories, American Samoa operations funding, and capital improvement project grants. OIA also administers discretionary assistance through, for example, technical assistance grants and operations and maintenance improvement program grants. The administration and management of OIA grants is guided by OIA’s Financial Assistance Manual. OIA grants other than compact assistance are subject to Interior’s Grants Management Common Rule, relevant Office of Management and Budget (OMB) circulars, and specific terms and conditions that OIA outlines in each grant agreement, such as semiannual narrative and financial reporting and grant expiration dates. Within OIA, two divisions are largely responsible for grant administration and management—the Technical Assistance Division and the Budget and Grants Management Division. 
The Technical Assistance Division, which administers general technical assistance grants in addition to several other types of technical assistance, has a director and two grant managers. The Budget and Grants Management Division, which covers capital improvement project and operations and maintenance improvement program grants, has a director and three grant managers. A third OIA division—the Policy and Liaison Division—also provides some staff for grant-related tasks, including staff that focus on OIA’s accountability and audit responsibilities. The majority of OIA’s budget is directed to compact assistance and permanent fiscal payments (see table 1). About 2 percent of OIA’s budget is dedicated to administrative costs, leaving less than 16 percent for noncompact grants and technical assistance. Among the random sample of 173 OIA grant project files that we reviewed in our March 2010 report, we identified 49 OIA technical assistance grant projects from a variety of technical assistance grant types (see table 2). The 49 technical assistance grant projects that we reviewed in our March 2010 report were geographically dispersed among the insular areas and the State of Hawaii (see table 3). On the basis of our review of grant files from a random sample of grant projects, we determined that the long-standing internal control weaknesses that we, Interior’s Office of Inspector General, and others identified between 2000 and 2009 still exist. We estimated that 39 percent of the 1,771 grant projects in OIA’s grant management database demonstrate at least one internal control weakness that may increase the projects’ susceptibility to mismanagement. Of the 49 technical assistance grant projects in our sample, 47 grant projects demonstrated one or more of the internal control weaknesses that we assessed, which is more than double our estimated 39 percent occurrence rate for OIA grants as a whole. 
The eight internal control weaknesses that we assessed can be grouped into three categories based on the entity responsible for the activities: grant recipient activities, OIA grant management activities, or joint activities between grant recipients and OIA. Table 4 shows (1) how often we estimated each internal control weakness would occur within the universe of OIA grants based on our random sample, and (2) specific data on the 49 technical assistance grants included in our sample. The most prevalent weaknesses for the 49 technical assistance grant projects were insufficient reporting and record-keeping discrepancies. Table 5 shows how many internal control weaknesses were demonstrated by the 49 technical assistance grant projects in our sample. For example, one general technical assistance grant project that we reviewed in detail—the 2005 grant for the USVI Household Income and Expenditures Survey (HIES) project—had 4 out of 5 applicable internal control weaknesses. In 2005, OIA awarded a general technical assistance grant to the Eastern Caribbean Center (ECC) at the University of the Virgin Islands for the purpose of collecting data to update important economic and demographic indicators for the territory. Because of funding constraints, OIA was not able to award the entire amount requested at that time. In addition, OIA later reduced its financial support of the project after data collection was underway, thereby reducing the scope of data collection efforts. The Director of the ECC reported that OIA’s decision to reduce available funding after data collection had begun was disastrous to the statistical integrity of the survey. 
In reviewing this grant project, we found the following four internal control weaknesses: (1) failure to submit the required status report in full and on time, (2) failure to submit the required final reports on time, (3) expected or actual completion dates that occurred after grant expiration, and (4) information in OIA’s grant management database that did not match information in the grant file. These weaknesses and other problems affected project completion in several ways, including the loss of additional funding that OIA later awarded. In 2007, OIA granted additional funds for the HIES project to complete tabulation of the data that had been collected. However, because so much time had passed since the initial data collection effort, the Director of the ECC stated that it was not possible to complete the data collection as originally planned. Due to the lack of activity with the grant and the fact that no narrative status reports were submitted, OIA deobligated these additional grant funds in their entirety in February 2009. The final HIES report also was not completed until September 2009, more than 4 years after the initial grant was awarded. OIA has taken several important steps to improve grant project implementation and management but faces several obstacles in its efforts to compel insular areas to complete their projects in a timely and effective manner. Over the past 5 years, OIA has taken the following steps to improve grant project implementation and management: Competitive allocation system. In fiscal year 2005, OIA implemented a new competitive allocation system for the $27.7 million in capital improvement project grants that it administers to the insular areas. This system provides incentives for financial management improvements and project completion by tying a portion of each insular area’s annual allocation to the insular governments’ efforts in these areas—such as their efforts to submit financial and status reports on time. 
Through this system, OIA scores each insular area against a set of performance-based criteria and increases allocations to those insular areas with higher scores, thereby lowering allocations to insular areas with lower scores. Grant expiration dates. Beginning in 2005, to encourage expeditious use of funds, OIA established 5-year expiration dates in the terms and conditions of new capital improvement project grants. Beginning in 2008, OIA also notified insular area officials of expiration dates for grant projects that had been ongoing for more than 5 years with no or limited progress. OIA officials explained that while the expiration dates have not yet pushed all of the insular areas to complete projects, they have encouraged some areas to do so. The officials also stated that the expiration dates have helped OIA grant managers administer and manage grants—which they believe has improved accountability—and have been useful for insular area grantees whose agencies have high staff turnover and were unaware of the status of older grants. Technical assistance projects have shorter grant terms than capital improvement projects, with expiration dates within 1 to 2 years; we found that OIA extended the grant expiration date at least once for 18 of the 49 technical assistance grant projects in our sample. Actions to improve insular area grant management continuity. OIA has also taken steps to help with the continuity of grant administration at the insular level. For example, in March 2008, OIA awarded a $770,000 grant for capital improvement project administration in the CNMI, which provided funding for positions in the local central grant management office in that insular area. According to the grant manager for CNMI capital improvement projects, the grant was given to help ensure that the central grant management office had the staff necessary to help move implementation of projects forward. 
Despite OIA’s efforts, some insular areas are still not completing their projects in a timely and effective manner, and OIA faces the following key obstacles in compelling them to do so: Lack of sanctions for delayed or inefficient projects. Current OIA grant procedures provide few sanctions for delayed or inefficient projects. For example, although OIA established grant expiration dates, they have little practical effect. In theory, a grant expiration date encourages timely completion of a project because if a project is not completed on time, the funds are taken away from the recipient. However, if an insular area’s OIA grant funds expire, while the funds do not remain immediately available for the project, the insular area does not lose the funds because OIA treats its capital improvement project grants as mandatory funding with “no-year funds,” based on the agency’s interpretation of relevant laws. Thus, after a grant expires, OIA deobligates the funds and they are returned to the insular area’s capital improvement project account to be reobligated for the same or other projects. Recently, OIA has taken steps to identify possible solutions and actions that could help provide effective sanctions for insular areas that do not efficiently complete projects and expend funds. In doing so, OIA has faced uncertainty regarding the authorities it has to change its current policies and practices, which are guided by many special agreements, laws, and regulations. OIA resource constraints. OIA officials report that resource constraints impede effective project completion and proactive monitoring and oversight. Although they could not provide us with data, numerous officials in OIA asserted that heavy workloads are a key challenge in managing grants. 
The effects of insufficient resources vary across grant type but include impacts on the ability to maintain files, adopt a proactive oversight approach that could aid project completion, conduct more detailed financial reviews of projects, and conduct site visits to more projects to better ensure that mismanagement is detected. Importantly, although grant managers for capital improvement projects noted that the most effective action they can take to move projects along is to conduct site visits, they also asserted that their current workloads only afford one visit per year. Despite their concurrence that additional resources are needed, OIA division directors confirmed that they have not formally communicated these needs to decision makers, or higher levels within Interior, and have not developed a workforce plan or other formal process that demonstrates a need for additional resources. Moreover, OIA does not track workload measures, such as the number of grants handled by each grant manager, to show changes over time that would help justify the need for additional resources. Inconsistent and insufficiently documented project redirection policies. OIA’s current project redirection approval practices do little to discourage insular areas from redirecting project funds in ways that hinder project completion. We found that insular areas shift priorities and frequently redirect grant project funds, which in some cases expedites project completion and in other cases impedes it. Currently, OIA’s policies for granting project redirection requests vary across insular areas. Specifically, in American Samoa, project redirection is limited to changes within a priority category because the insular area’s grants are issued by priority areas. In contrast, the other insular areas each receive grants as one capital improvement grant and are able to redirect money between projects with widely different purposes. 
Furthermore, OIA’s policies for granting project redirection requests are also not well-documented. Project redirection is a particular concern in instances where a project starts and federal money is expended but the project is never completed, leading to the waste of both federal resources and the local governments’ limited technical capacity to implement projects. Inefficient grant management system. OIA’s current data system for tracking grants is limited in the data elements it contains, leading to inconsistencies in the data that some grant managers rely on for monitoring and oversight activities. Grant managers vary in the degree to which they rely upon OIA’s database, as well as the priority they place on keeping information in the database up to date. While grant managers for all grant types reported relying on the database for information on the amount of funds drawn down from grants and for responding to requests for data from outside parties (such as Interior’s Office of Inspector General and GAO), some told us that they do not find OIA’s database useful and therefore maintain their own separate spreadsheets to track some information, including expiration dates, grant status, and receipt dates for the most recent financial and narrative reports. As reported in the Domestic Working Group’s Guide to Opportunities for Improving Grant Accountability, consolidating information systems can enable agencies to better manage grants. Along these lines, Interior is currently phasing in a centralized agencywide system—the Financial and Business Management System—that is scheduled to be implemented in OIA in 2011. Our March 2010 report contained three recommendations to the Secretary of the Interior designed to improve the department’s management and oversight of grants to the insular areas, including one that would directly impact OIA’s technical assistance grant programs. 
Specifically, we recommended that the Secretary of the Interior direct OIA to create a workforce plan and reflect in its plan the staffing levels necessary to adopt a proactive monitoring and oversight approach. Such proactive monitoring and oversight would apply to all of OIA’s grant programs, including the technical assistance programs. OIA agreed with our report and told us that it will implement these recommendations. In conclusion, Madam Chairwoman, OIA has made important strides in implementing grant reforms, particularly in its efforts to establish disincentives for insular areas that do not complete grant projects in a timely and effective manner. However, the unique characteristics and situations facing insular area governments, and the need to mindfully balance respect for insular governments’ self-governance and political processes with the desire to promote efficiency in grant project implementation, limit as a practical matter some of the actions that OIA can take to improve the implementation of grant projects. Nonetheless, OIA has not exhausted all of its available opportunities to better oversee grants and reduce the potential for mismanagement and we will continue to monitor its implementation of our recommendations. Madam Chairwoman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jeffery D. Malcolm and Emil Friberg, Assistant Directors; Elizabeth Beardsley; Keesha Egebrecht; and Isabella Johnson. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. insular areas face serious economic and fiscal challenges and rely on federal funding to support their governments and deliver critical services. The Department of the Interior, through its Office of Insular Affairs (OIA), provides about $70 million in grants annually, including technical assistance grants, to increase insular area self-sufficiency. In the past, GAO and others have raised concerns regarding insular areas' internal control weaknesses, which increase the risk of grant fund mismanagement, fraud, and abuse. In March 2010, GAO reported on insular area grants (GAO-10-347); this testimony summarizes that report and focuses on (1) whether previously reported internal control weaknesses have been addressed and, if not, to what extent they are prevalent among OIA grant projects, including technical assistance grant projects, as of March 2010; and (2) the extent to which OIA has taken action to improve the implementation and management of grant projects, as of March 2010. For the March 2010 report, GAO reviewed a random sample of 173 OIA grant project files and interviewed OIA and insular area officials. For this testimony, GAO conducted additional analysis for the 49 technical assistance grant projects included in the sample. GAO's March 2010 report contained three recommendations. Interior agreed with the recommendations. This testimony statement contains no new recommendations. Internal control weaknesses previously reported by GAO and others continue to exist, and about 40 percent of grant projects funded through OIA have these weaknesses, which may increase their susceptibility to mismanagement. These weaknesses can be categorized into three types of activities: grant recipient activities, joint activity between grant recipients and OIA, and OIA's grant management activities. For the 49 technical assistance grant projects in GAO's sample, the most prevalent weaknesses were insufficient reporting and record-keeping discrepancies. 
Over the past 5 years, OIA has taken steps to improve project implementation and management. Most notably, OIA established incentives for financial management improvements and project completion by tying a portion of each insular area's annual allocation to the insular governments' efforts in these areas--such as their efforts to submit financial and status reports on time. In addition, OIA established expiration dates for grants to encourage expeditious use of the funds. Despite these and other efforts, some insular areas are still not completing their projects in a timely and effective manner, and OIA faces key obstacles in compelling them to do so. Specifically, (1) current OIA grant procedures provide few sanctions for delayed or inefficient projects, and the office is not clear on its authorities to modify its policies; (2) resource constraints impede effective project completion and proactive monitoring and oversight; (3) inconsistent and insufficiently documented project redirection policies do little to discourage insular areas from redirecting grant funds in ways that hinder project completion; and (4) OIA's current data system for tracking grants is limited and lacks specific features that could allow for more efficient grant management.
In fiscal year 1986, Congress directed DOD to destroy the U.S. stockpile of lethal chemical agents and munitions. DOD designated the Department of the Army as its executive agent for the program, and the Army established the Chemical Demilitarization (or Chem-Demil) Program, which was charged with the destruction of the stockpile at nine storage sites. Incineration was selected as the method to destroy the stockpile. In 1988, the Chemical Stockpile Emergency Preparedness Program (CSEPP) was created to enhance the emergency management and response capabilities of communities near the storage sites in case of an accident; the Army and the Federal Emergency Management Agency (FEMA) jointly managed the program. In 1997, consistent with congressional direction, the Army and FEMA clarified their CSEPP roles by implementing a management structure under which FEMA assumed responsibility for off-post (civilian community) program activities, while the Army continued to manage on-post chemical emergency preparedness. The Army provides CSEPP funding to FEMA, which is administered via grants to the states and counties near where stockpile sites are located in order to carry out the program’s off-post activities. Agent destruction began in 1990 at Johnston Atoll in the Pacific Ocean. Subsequently, Congress directed DOD to evaluate the possibility of using alternative technologies to incineration. In 1994, the Army initiated a project to develop nonincineration technologies for use at the two bulk-agent only sites at Aberdeen, Maryland, and Newport, Indiana. These sites were selected in part because their stockpiles were relatively simple—each site had only one type of agent and this agent was stored in bulk-agent (ton) containers. In 1997, DOD approved pilot testing of a neutralization technology at these two sites. 
Also in 1997, Congress directed DOD to evaluate the use of alternative technologies and suspended incineration planning activities at two sites with assembled weapons in Pueblo, Colorado, and Blue Grass, Kentucky. Furthermore, Congress directed that these two sites be managed in a program independent of the Army’s Chem-Demil Program and report to DOD instead of the Army. Thus, the Assembled Chemical Weapons Assessment (ACWA) program was established. The nine sites, the types of agent, and the percentage of the original stockpiles are shown in table 1. In 1997, the United States ratified the CWC, which prohibits the use of these weapons and mandates the elimination of existing stockpiles by April 29, 2007. A CWC provision allows extensions of up to 5 years to be granted. The CWC also contains a series of interim deadlines applicable to the U.S. stockpile (see table 2). The United States met the 1 percent interim deadline in September 1997 and the 20 percent interim deadline in July 2001. As of June 2003, the Army was reporting that a total of about 26 percent of the original stockpile had been destroyed. Three other countries (referred to as states parties)—India, Russia, and one other country—have declared chemical weapons stockpiles and are required to destroy them in accordance with CWC deadlines as well. As of April 2003, two of these three countries (India and one other country) had met the 1 percent interim deadline to destroy their stockpiles. Of the three countries, only India met the second (20 percent) interim deadline to destroy its stockpile by April 2002. However, Russia, with the largest declared stockpile—over 40,000 tons— did not meet the 1 percent or the 20 percent interim deadlines, and only began destroying its stockpile in December 2002. In 2001, Russia requested a 5-year extension to the 2007 deadline. Russia did destroy 1 percent of its stockpile by April 2003, but it is doubtful that Russia will meet the extended 2012 deadline even if the extension is granted. 
Traditionally, management and oversight responsibilities for the Chem-Demil Program reside primarily within three levels at DOD—the Under Secretary of Defense (Acquisition, Technology, and Logistics) who is the Defense Acquisition Executive for the Secretary of Defense, the Assistant Secretary of the Army (Acquisition, Logistics, Technology) who is the Army Acquisition Executive for the Army, and the Program Manager for Chemical Demilitarization—because it is a major defense acquisition program. In addition to these offices, since August 2002, the Deputy Assistant to the Secretary of Defense (Chemical Demilitarization and Threat Reduction), has served as the focal point responsible for oversight, coordination, and integration of the Chem-Demil Program. In May 2001, in response to program cost, schedule, and management concerns, milestone decision authority was elevated to the Under Secretary of Defense (Acquisition, Technology, and Logistics). DOD stated that this change would streamline future decision making and increase program oversight. DOD indicated that the change was also consistent with the size and scope of the program, international treaty obligations, and the level of local, state, and federal interest in the safe and timely destruction of the chemical stockpile. In September 2001, after more than a yearlong review, DOD revised the program’s schedule milestones for seven of the nine sites and the cost estimates for all nine sites. These milestones represent the target dates that each site is supposed to meet for the completion of critical phases of the project. The phases include design, construction, systemization, operations, and closure. (Appendix II describes these phases and provides the status of each site.) The 2001 revision marked the third time the program extended its schedule milestones and cost estimates since it became a major defense acquisition program in 1994. 
The 2001 revision also pushed the milestones for most sites several years beyond the previous 1998 schedule milestones and, for the first time, beyond the April 2007 deadline contained in the CWC. Table 3 compares the 1998 and 2001 schedule milestones for starting and finishing agent destruction operations at the eight sites with chemical agent stockpiles in 2001. The planned agent destruction completion date at some sites was extended by over 5 years. DOD extended the schedule milestones to reflect the Army’s experience at the two sites—Johnston Atoll and Tooele—that had begun the destruction process prior to 2001. DOD found that previous schedule milestones had been largely based on overly optimistic engineering estimates. Lower destruction rates stipulated by environmental regulators, and the increased time needed to change a facility’s configuration when switching between different types of chemical agents and weapons, meant that destruction schedules had to be lengthened. Moreover, experience at Johnston Atoll, which began closure activities in 2000, revealed that previous closure estimates for other sites had been understated. In addition, DOD’s Cost Analysis Improvement Group modified the site schedules based on a modeling technique that considered the probabilities of certain schedule activities taking longer than anticipated. In particular, the group determined that the operations phase, where agent destruction takes place, has the highest probability of schedule delays, and lengthened that phase the most. Because the costs of the program are directly related to the length of the schedule, DOD also increased the projected life-cycle costs, from $15 billion in 1998 to $24 billion in 2001 (see fig. 1). 
In December 2001, after the program schedule and costs were revised, the Army transferred primary program oversight from the Office of the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) to the Office of the Assistant Secretary of the Army (Installations and Environment). According to the Army, this move streamlined responsibilities for the program, which were previously divided between these two offices. In January 2003, the Army reassigned oversight responsibilities to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) for all policy and direction for the Chem-Demil Program and CSEPP. The Secretary of the Army also directed the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) and the Commanding General, U.S. Army Materiel Command, to jointly establish an agency to perform the chemical demilitarization as well as the chemical weapons storage functions. In response to this directive, the Army announced the creation of a new organization—the Chemical Materials Agency (CMA)—which will merge the demilitarization and the storage functions. During this transition process, the Program Manager for Chemical Demilitarization was redesignated as the Program Manager for the Elimination of Chemical Weapons and will report to the Director of CMA and have responsibility for each site through the systemization phase. The Director for Operations will manage the operations and closure phases. As of June 2003, the Program Manager for the Elimination of Chemical Weapons was providing day-to-day management for the sites at Anniston, Umatilla, Newport, and Pine Bluff; the Director for Operations was providing day-to-day management for the sites at Tooele, Aberdeen, and Johnston Atoll, and the Program Manager, ACWA, was managing the sites at Pueblo and Blue Grass. Since 1990, we have issued a number of reports that have focused on management, cost, and schedule issues related to the Chem-Demil Program. 
For example, in a 1995 testimony, we cited the possibility of further cost growth and schedule slippage due to environmental requirements, public opposition to the baseline incineration process, and lower than expected disposal rates. We also testified that weaknesses in financial management and internal control systems had hampered program results and that alternative technologies were unlikely to mature in time to meet CWC deadlines. In 1995, we noted that the emergency preparedness program had been slow to achieve results and that communities were not fully prepared to respond to a chemical emergency. In 1997, we found that high-level management attention was needed at the Army and FEMA to clearly define management roles and responsibilities. In 2001, we found that the Army and FEMA needed a more proactive approach to improve working relations with CSEPP states and local communities and to assist them in preparing budgets and complying with program performance measures. In 2000, we found that the Chem-Demil Program was hindered by its complex management structure and ineffective coordination between program offices. We recommended that the Secretary of Defense direct the Secretary of the Army to clarify the management roles and responsibilities of program participants, assign accountability for achieving program goals and results, and establish procedures to improve coordination among the program’s various elements and with state and local officials. A detailed list of these reports and other products is included in Related GAO Products at the end of this report. Despite recent efforts to improve the management and streamline the organization of the Chem-Demil Program, the program continues to falter because several long-standing leadership, organizational, and strategic planning weaknesses remain unresolved. The absence of sustained leadership confuses decision-making authority and obscures accountability. 
In addition, the Army’s recent reorganization of the program has neither reduced its complex organization nor clarified the roles and responsibilities of various entities. For example, CMA reports to two different offices with responsibilities for different phases of the program, and the reorganization left the management of CSEPP divided between the Army and FEMA. The ACWA program continues to be managed outside of the Army as directed by Congress. Finally, the lack of an overarching, comprehensive strategy has left the Chem-Demil Program without a top-level road map to guide and monitor the program’s activities. The absence of effective leadership, streamlined organization, and important management tools, such as strategic planning, creates a barrier to the program accomplishing the safe destruction of the chemical stockpile and staying within schedule milestones, thereby raising program costs. The Chem-Demil Program has experienced frequent shifts in the leadership providing oversight, both between DOD and the Army and within the Army, and frequent turnover in key program positions. These shifts have led to confusion among participants and stakeholders about the program’s decision making and have obscured accountability. For example, program officials were not consistent in following through on promised initiatives, and some initiatives were begun but not completed. Also, when leadership responsibilities changed, new initiatives were often introduced and old initiatives were abandoned, obscuring accountability for program actions. The program has lacked sustained leadership above the program level, as demonstrated by the multiple shifts in oversight between DOD and the Army, which have undermined consistent decision making. The leadership responsible for oversight has shifted between the Army and DOD three times during the past two decades, with the most recent change occurring in 2001. Table 4 summarizes these changes. 
As different offices took over major decision authority, program emphasis frequently shifted: initiatives were pursued but not completed, consistency among initiatives was not maintained, and responsibility for decisions changed hands. For example, we reported in August 2001 that the Army and FEMA had addressed some management problems in how they coordinated emergency preparedness activities after they had established a memorandum of understanding to clarify roles and responsibilities related to CSEPP. However, according to FEMA officials, DOD did not follow the protocols for coordination as agreed upon with the Army when making decisions about emergency preparedness late in 2001. This led to emergency preparedness items being funded without adequate plans for distribution, which delayed the process. These changes in oversight responsibilities also left the stakeholders in the states and local communities uncertain as to the credibility of federal officials. Leadership responsibilities for the program within the Army have also transferred three times from one assistant secretary to another (see table 5). During this time, there were numerous CSEPP issues on which the Army took positions with which FEMA did not concur. For example, in August 2002, officials in the Office of the Assistant Secretary of the Army (Installations and Environment) committed to funding nearly $1 million to study building an emergency operations center for a community near Umatilla, with additional funds to be provided later. Since the program shifted to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) in 2003, program officials have been reconsidering this commitment. The problem of the Army and FEMA not speaking with one voice led to confusion among state and local communities. 
Further, the dual or overlapping authority of the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) and the Assistant Secretary of the Army (Installations and Environment) in 2001 was never clarified. Without clear lines of authority, one office took initiatives without consulting the other. As a result, stakeholders were unclear whether initiatives were valid. In addition to these program shifts, the Deputy Assistant Secretary of the Army (Chemical Demilitarization)—an oversight office moved from DOD to the Army in 1998—reported to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) from 1998 until 2001, then to the Assistant Secretary of the Army (Installations and Environment) until 2003, and now again to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology). The many shifts of this oversight office, which is responsible for programmatic decisions, left stakeholders confused about its oversight role and about the necessity of the funding requests it made. As a result, extra funding accumulated, which ultimately caused Congress to cut the program’s budget. The Chem-Demil Program has also experienced a number of changes and vacancies in key program leadership positions, which has obscured accountability. This issue is further compounded, as discussed later, by the lack of a strategic plan to provide an agreed-upon road map for officials to follow. Within the Army, three different officials have held senior leadership positions since December 2001. In addition, five officials have served as the Deputy Assistant Secretary of the Army (Chemical Demilitarization) during that time. The program manager’s position remained vacant for nearly 1 year, from April 2002 to February 2003, before being filled. However, in June 2003, after only 4 months, the program manager resigned and the Army named a replacement. 
Frequent shifts in key leadership positions have led to several instances in which the lack of continuity affected decision making and obscured accountability. For example, in June 2002, a program official promised to support future funding requests for emergency preparedness equipment from a community, but his successor did not fulfill this promise; as a result, the community submitted several funding requests that went unsupported. The lack of leadership continuity makes it unclear who is accountable when commitments are made but not implemented. Moreover, when key leaders do not remain in their positions long enough to develop the needed long-term perspective on program issues or to effectively follow through on program initiatives, it is easy for them to deny responsibility for previous decisions and avoid current accountability. The recent reorganization by the Army has not streamlined the program’s complex organization or clarified roles and responsibilities. For example, the Director of CMA will now report to two different senior Army organizations, which is one more than under the previous structure. This divided reporting approach is still not fully developed, but it may adversely affect program coordination and accountability. The reorganization has also divided the responsibility for various program phases between two offices within CMA. One organization, the Program Manager for the Elimination of Chemical Weapons, will manage the first three phases for each site, and a newly created organization, the Director of Operations, will manage the final two phases. This reorganization changes the cradle-to-grave management approach that was used to manage sites in the past and has blurred responsibilities for officials who previously provided support in areas such as quality assurance and safety. Moreover, the reorganization did not address two program components—community-related CSEPP and ACWA. CSEPP will continue to be jointly managed with FEMA. 
ACWA, as congressionally directed, will continue to be managed separately from the Army by DOD. During the transition process, no implementation plan was promulgated when the new organization was first announced in January 2003. As of June 2003, the migration of roles and responsibilities formerly assigned to the office of the Program Manager for Chemical Demilitarization into the new CMA had not been articulated. For example, several key CMA officials who had been part of the former program office told us that they were unsure of their new roles within CMA and the status of ongoing program initiatives. Furthermore, past relationships and responsibilities among former program offices and site activities have been disrupted. Although the establishment of CMA with a new directorate responsible for operations at Tooele and Aberdeen is underway, former program office staff told us they did not know how this new organization would manage the sites in the future. While DOD and the Army have issued numerous policies and guidance documents for the Chem-Demil Program, they have not developed an overarching, comprehensive strategy or an implementation plan to guide the program and monitor its progress. Leading organizations embrace principles for effectively implementing and managing programs. Key aspects of this approach include promulgating a comprehensive strategy that sets out a mission, long-term goals, and methods to accomplish those goals, as well as an implementation plan that includes annual performance goals, measurable performance indicators, and evaluation and corrective action plans. According to DOD and Army officials, the Chem-Demil Program relies primarily on guidance and planning documents related to the acquisition process. 
For example, the former program manager drafted several documents, such as the Program Manager for Chemical Demilitarization’s Management Plan and Acquisition Strategy for the Chemical Demilitarization Program, as the cornerstone of his management approach. Our review of these and other key documents showed that they did not encompass all components of the program or other nonacquisition activities. Some documents had various elements, such as a mission statement, but they were not consistently written. None contained all of the essential elements expected in a comprehensive strategy, nor did any contain the aspects needed for an implementation plan, such as an evaluation and corrective action plan. Further, all of the documents were out of date and did not reflect recent changes to the program. DOD and Army officials stated that the program’s strategy would be articulated in the program’s updated acquisition strategy to be completed by the new Director of CMA. According to the draft acquisition strategy, the focus is to acquire services, systems, and equipment. Again, this approach does not address all components of the Chem-Demil Program, such as CSEPP and ACWA. More importantly, a strategic plan would ensure that all actions support overall program goals as developed by the appropriate senior-level office with oversight responsibility for the program. An implementation plan would define the steps the program would take to accomplish its mission. Further, a strategy document, coupled with an implementation plan, would clarify roles and responsibilities and establish program performance measurements. Together, these documents would provide the foundation for a well-managed program and ensure continuity of operations for program officials to follow. The program continues to miss most milestones, following a decade-long trend. 
Nearly all of the incineration sites will miss the 2001 schedule milestones because of substantial delays during their systemization (equipment testing) or operations (agent destruction) phases. Delays at sites using incineration stem primarily from a number of problems that DOD and the Army have not been able to anticipate or control, such as concerns involving plant safety, difficulties in meeting environmental permitting requirements, public concerns about emergency preparedness plans, and budget shortfalls. The neutralization sites have not yet missed milestones but have experienced delays as well. DOD and the Army have not developed an approach to anticipate and address potential problems that could adversely affect program schedules, costs, and safety. Neither DOD nor the Army has adopted a comprehensive risk management approach to mitigate potential problems. As a result, the Chem-Demil Program faces a higher risk of missing its schedule milestones and CWC deadlines, incurring rising costs, and unnecessarily prolonging the potential risk to the public associated with the storage of the chemical stockpile. Most incineration sites will miss important milestones established in 2001 due to schedule delays. For example, delays at Anniston, Umatilla, and Pine Bluff have already resulted, or will result, in their missing the 2001 schedule milestones to begin chemical agent destruction operations (operations phase). Johnston Atoll will miss its schedule milestone for shutting down the facility (closure phase). The Tooele site has not missed any milestones since the 2001 schedule was issued; however, the site has undergone substantial delays in destroying its stockpile, primarily due to a safety-related incident in July 2002. If additional delays occur at the Tooele site, it could miss its next milestone as well. Table 6 shows the status of the incineration sites that will miss 2001 schedule milestones. 
The delays at the incineration sites have resulted from various long-standing issues, which the Army has not been able to effectively anticipate or control because it does not have a process to identify and mitigate them. An effectively managed program would have an approach, such as a lessons learned process, to identify and mitigate issues. Although the program now has extensive experience with destroying agents at two sites, the Chem-Demil Programmatic Lessons Learned Program has been shifted from a centralized headquarters effort to individual contractors. In September 2002, we reported on the centralized lessons learned program and found it to be generally effective but noted that it should be improved and expanded. With the program decentralized, it is uncertain how knowledge will be leveraged between sites to avoid or lessen potential delays due to issues that have previously occurred. In addition, program officials told us that they were concerned that lessons from the closure at Johnston Atoll were not being captured and saved for future use at other sites. Many delays have resulted from operational incidents and from environmental permitting, community protection, and funding issues. This continues a trend we identified in previous reports on the program. The following examples illustrate some of the issues that have caused delays at incineration sites since 2001: Incidents during operations: Agent destruction operations at Tooele were suspended from July 2002 to March 2003 because of a chemical incident involving a plant worker who came into contact with a nerve agent while performing routine maintenance. Subsequent investigations determined that this event occurred because some procedures related to worker safety were either inadequate or not followed. A corrective action plan, which required the implementation of an improved safety plan, was instituted before operations resumed. 
Since it resumed operations in March 2003, Tooele has experienced several temporary shutdowns. (These shutdowns are discussed further in app. II.) Environmental permitting: The start of agent destruction operations at the Umatilla and Anniston sites has been delayed because of several environmental permitting issues. Delays at the Umatilla site have resulted from several unanticipated engineering changes related to reprogramming software and design changes that required permit modifications. An additional delay occurred at the Umatilla site when the facility was temporarily shut down in October 2002 by state regulators because furnaces were producing an unexpectedly high amount of heavy metals during surrogate agent testing. The testing was suspended until a correction could be implemented. Delays at the Anniston site occurred because state environmental regulators did not accept test results for one of the furnaces because the subcontractor did not follow state permit-specified protocols. Community protection: Destruction operations at the Anniston site have been delayed because of concerns about emergency preparedness for the surrounding communities. These concerns included the inadequacy of protection plans for area schools and for special needs residents. Although we reported on this issue in July 1996 and again in August 2001, and a senior DOD official identified it as a key concern in September 2001, the Army was unable to come to a satisfactory resolution with key state stakeholders prior to the planned January 2003 start date. As of June 2003, negotiations were still ongoing between the Army and key public officials to determine when destruction operations could begin. Funding: Systemization and closure activities were delayed at the Pine Bluff and Johnston Atoll sites, respectively, because program funds planned for demilitarization were redirected by DOD in fiscal year 2002 to pay for $40.5 million in additional community protection equipment for Anniston. 
This was an unfunded budget expense, and the Army reduced funds for the Pine Bluff site by $14.9 million, contributing to construction and systemization milestones slipping 1 year. The Pine Bluff site was selected because the loss of funding would not delay the projected start of operations during that fiscal year. Program officials told us that the total program cost of this schedule slip would ultimately be $90 million. Additionally, funds were reduced for the Johnston Atoll site by $25.1 million because it was in closure. According to an Army official, delays increase program costs by approximately $250,000 to $300,000 a day, or about $10 million per month. Since 2001, delays have caused cost increases of $256 million at the incineration sites shown in table 7. Because of the delays, the Army is in the process of developing new milestones that would extend beyond those adopted in 2001. According to an Army official, the program will use events that have occurred since 2001 to present new cost estimates to DOD in preparation for the fiscal year 2005 budget submission. Program officials told us that they estimate costs have already increased $1.2 billion, and this estimated increase is likely to rise further as additional factors are considered. The two bulk-agent-only sites, Aberdeen and Newport, have experienced delays but have not breached their milestones. Their schedules were revised in response to concerns about the continued storage of the chemical stockpile after the events of September 11, 2001. In 2002, DOD approved the use of a modified process that will accelerate the rate of destruction at these two sites. For example, the Army estimates that the modified process will reduce the length of time needed to complete destruction of the blister agent stockpile at Aberdeen from 20 months to 6 months.
The Army estimates that this reduction, along with other changes, such as the off-site shipping of a waste byproduct, will move the scheduled end of operations up by 5 years, from 2008 to 2003. Similarly, projections for agent destruction operations at Newport were reduced from 20 months to 7 months, and the destruction end date moved up from 2009 to 2004. While the Aberdeen site did begin destruction operations, as of June 2003 it had achieved a peak rate of only 2 containers per day, far less than the projected peak daily rate of 12, and had experienced unanticipated problems removing residual agent from the containers. After 2 months of processing, Army officials said the site had processed only 57 of the 1,815 containers in Aberdeen’s stockpile and will have to do additional processing of these containers because of a higher-than-anticipated amount of hardened agent. Even if the peak daily rate of 12 is achieved, the site will not meet the October 2003 Army estimate. At the Newport site, construction problems will delay the start of operations, missing the program manager’s October 2003 estimate for starting agent destruction operations. Another possible impediment to starting operations is the program’s effort to treat the waste byproduct at a potential off-site disposal facility in Ohio. These efforts have met resistance from some community leaders and residents near the potential disposal site. If the Army is unable to use an off-site facility, the disposal may have to be done on site, requiring the construction of a waste byproduct treatment facility and causing further delays and increased costs. Schedule milestones were not adopted for the Pueblo and Blue Grass sites in the 2001 schedule because DOD had not selected a destruction technology. Subsequently, DOD selected destruction technologies for these sites; however, these decisions were made several months beyond the dates estimated in 2001.
For example, while program officials indicated that the technology decision for the Kentucky site would be made by September 2002, the decision was not made until February 2003. Significantly, DOD announced initial schedule milestones for these two sites that extend beyond the CWC’s extended deadline of April 2012. According to DOD officials, these schedules are preliminary and will be reevaluated after the selected contractors complete their initial designs of the facilities. Plans for these sites are immature, and changes are likely to occur as they move closer to the operations phase, which is still at least several years away. DOD and the Army have not implemented a comprehensive risk management approach that would proactively anticipate and influence issues that could adversely affect the program’s progress. The program manager’s office drafted a risk management plan in June 2000, but the plan has not been formally approved or implemented. According to program officials, a prior program official drafted the plan, and subsequent officials did not approve or further develop it. The draft plan noted that DOD’s acquisition rules require program managers to establish a risk management plan to identify and control risks related to performance, cost, and schedule. Such a plan would allow managers to systematically identify, analyze, and influence risk factors and could help keep the program within its schedule and cost estimates. DOD and Army officials have given several reasons for not having an overall risk management plan. A DOD official indicated that the approach used to address program problems has been crisis management, which has forced DOD to react to issues rather than control them. The deputy program manager stated that the program’s focus has been on managing individual sites by implementing initiatives to improve contractor performance as it relates to safety, schedule, and cost.
The official also said that establishing a formal, integrated risk management plan has not been a priority. An official from the program manager’s office said the infrastructure is in place to finalize an integrated risk management plan by October 2003, which coincides with the date CMA takes over leadership of the program. However, due to the transition that the organization is undergoing, the status of this effort is uncertain. The Army defines its risk management approach as a process for identifying and addressing internal and external issues that may have a negative impact on the program’s progress. A risk management approach has five basic steps, which assist program leaders in making effective decisions for better program outcomes. Simply stated, the first step is to identify the issues that pose a risk to the program; for example, a problem in environmental permitting can significantly delay the program schedule. The second step is to analyze the identified risks and prioritize them using established criteria. The third step is to create a plan of action to mitigate the prioritized risks in order of importance. The fourth step is to track and validate the actions taken. The last step is to review and monitor the outcomes of the actions taken to ensure their effectiveness; additional remedies may be needed if actions are not successful or the risks have changed. Risk management is a continuous, dynamic process and must become a regular part of the leadership decision process. Without such an approach, the Chem-Demil Program will continue to manage by addressing issues as they arise rather than by developing strategies or contingency plans to meet program issues. As program complexity increases with new technologies and more active sites, a comprehensive risk management approach, as the acquisition regulations require, would facilitate program success and help control costs.
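The five-step cycle described above can be outlined in code. The following is an illustrative sketch only, not drawn from the Army’s draft plan: the issue names, likelihood and impact figures, and the likelihood-times-impact scoring criterion are all hypothetical placeholders for whatever criteria a program would actually establish.

```python
# Illustrative sketch of a five-step risk management cycle.
# All issue names, scores, and criteria below are hypothetical.

def manage_risks(issues):
    # Step 1: identify the issues that pose a risk to the program.
    risks = [i for i in issues if i["impact"] > 0]

    # Step 2: analyze and prioritize the risks using established criteria
    # (here, a simple likelihood-times-impact score).
    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]
    risks.sort(key=lambda r: r["score"], reverse=True)

    # Step 3: create a plan of action to mitigate risks in priority order.
    plan = [(r["name"], r["mitigation"]) for r in risks]

    # Step 4: track and validate the actions taken.
    tracking = {name: "in progress" for name, _ in plan}

    # Step 5: review and monitor outcomes; unresolved risks re-enter the
    # cycle, since risk management is a continuous, dynamic process.
    unresolved = [name for name, status in tracking.items() if status != "closed"]
    return plan, unresolved

issues = [
    {"name": "permit modification delay", "likelihood": 0.6, "impact": 9,
     "mitigation": "engage state regulators early"},
    {"name": "community preparedness concerns", "likelihood": 0.8, "impact": 7,
     "mitigation": "fund protection equipment up front"},
]
plan, unresolved = manage_risks(issues)
```

The point of the sketch is the cycle itself: risks that are not closed out in step 5 feed back into step 1, rather than being handled one crisis at a time.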
Such a proactive approach would allow the program to systematically identify, analyze, and manage the risk factors that could hamper its efforts to destroy the chemical stockpile and help keep it within its schedule and cost estimates. For more than a decade, the Chem-Demil Program has struggled to meet schedule milestones and to control the enormous costs of destroying the nation’s chemical weapons stockpile, and the program will also miss future CWC deadlines. Despite several reorganizations of its complex structure, the program continues to flounder. Program leadership at both the oversight and the program manager levels has shifted frequently, contributing to the program’s continued instability, ineffective decision making, and weak accountability. The repeated realignments of the program have done little to resolve its awkward, hydra-like structure, in which roles and responsibilities continue to be poorly defined, multiple lines of authority exist, and coordination between the various entities is poor. These shifts and realignments have taken place without the benefit of a comprehensive strategy and an implementation plan that could help the program clearly define its mission and begin working toward its goals effectively. If the program had these key pillars, a strategy to guide it from its inception and an implementation plan to track performance, it would be in a better position to achieve desired outcomes. Unless DOD and Army leadership take immediate action to clearly define roles and responsibilities throughout the program and implement an overarching strategic plan, the program will have a low probability of achieving its principal goal of destroying the nation’s chemical weapons stockpile safely and within the 2001 schedule. The Chem-Demil Program is entering a crucial period as more of its sites move into the operations phase.
As this occurs, the program faces potentially greater challenges than it has already encountered, including the possibilities of growing community resistance, unanticipated technical problems, and serious site incidents. Unless program leadership is proactive in identifying potential internal and external issues and preparing for them, or in reducing the chances that they will occur, the program remains at great risk of failing to meet its scheduled milestones and the deadlines set by the CWC. These problems, and the subsequent delays, are likely to continue plaguing the program unless it is able to incorporate a comprehensive risk management system into its daily routine. Such an approach would enable the program to systematically identify, analyze, and manage risk factors and would help keep it within its schedule and cost estimates. Without the advantage of a risk management tool, the program will continue to be paralyzed by delays caused by unanticipated issues, resulting in spiraling program costs and missed deadlines that prolong the dangers of the chemical weapons stockpile to the American public. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with the Secretary of the Army, to develop an overall strategy and implementation plan for the chemical demilitarization program that would articulate a program mission statement; identify the program’s long-term goals and objectives; delineate the roles and responsibilities of all DOD and Army offices; establish near-term performance measures; and implement a risk management approach that anticipates and influences internal and external factors that could adversely impact program performance. In written comments on a draft of this report, DOD concurred with our recommendations.
In concurring with our recommendation to develop an overall strategy and implementation plan, DOD stated that it is in the initial stages of developing such a plan and estimates that it will be completed in fiscal year 2004. In concurring with our recommendation to implement a risk management approach, DOD stated that the CMA will review the progress of an evaluation of several components of its risk management approach within the next 120 days. At that time, DOD will evaluate the outcome of this review and determine any appropriate action. We believe these actions should improve program performance, provided that DOD’s plan incorporates a clearly articulated mission statement, long-term goals, well-delineated roles and responsibilities, and near-term performance measures, and that the Army’s review of its risk management approach focuses on anticipating and influencing internal and external factors that could adversely impact the Chem-Demil Program. DOD’s comments are printed in appendix III. DOD also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretary of the Army; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. For any questions regarding this report, please contact me at (512) 512-6020. Key contributors to this report were Donald Snyder, Rodell Anderson, Bonita Oden, John Buehler, Pam Valentine, Steve Boyles, Nancy Benco, and Charles Perdue. This report focuses on the Chemical Demilitarization (Chem-Demil) Stockpile Program, one of the components of the Chem-Demil Program.
Other components, such as the Chemical Stockpile Emergency Preparedness Program, were discussed only to determine their effects on the destruction schedule. To determine if recent changes in the stockpile program’s management and oversight have been successful in improving program progress, we interviewed numerous officials and reviewed various documents. Through a review of previous and current organizational charts, we noted a number of changes in the program from 1986 to the present. We interviewed Department of Defense (DOD) and Army officials to determine what effect organizational changes and management initiatives had on the program and to determine if a strategic plan had been developed to manage the program. We identified organizational changes between DOD and the Army, determined the rationale for the changes, and ascertained their effect on program performance. We reviewed Defense Acquisition System directives to determine the roles and responsibilities of DOD and the Army in managing the Chemical Demilitarization Program. We assessed the Chem-Demil Program’s Acquisition Strategy and Management and Program Performance plans to identify elements of a strategic plan and compared them to the general tenets and management principles embraced by the Government Performance and Results Act. Additionally, we interviewed Office of Management and Budget officials to discuss their assessment of the program’s performance and its adherence to a results-oriented management approach, and we reviewed DOD directives and regulations to determine the criteria for strategic planning. To determine the progress that DOD and the Army have made in meeting revised 2001 cost and schedule estimates and Chemical Weapons Convention (CWC) deadlines, we interviewed relevant program officials and reviewed a number of documents.
We reviewed the Army’s current program office estimates to destroy the chemical weapons stockpile, along with weekly and monthly destruction schedules, to understand how sites will perform and synchronize activities to meet milestones. We interviewed DOD’s Cost Analysis Improvement Group to determine how DOD developed estimates for the 2001 milestone schedules for each site. However, we did not independently evaluate the reliability of the methodology the Cost Analysis Improvement Group used to develop its estimates. Further, we interviewed program officials to determine the status of the destruction process at incineration and neutralization sites and the impact of delays on schedule and cost. We reviewed Selected Acquisition Reports and Acquisition Program Baselines to identify the increases in program cost estimates in 1998 and 2001 and to determine the relationship between changes to schedule milestones and increased program cost. Our analysis identified the effect that delays would have on schedule milestones at incineration and neutralization sites, as well as the types of schedule delays and their impact on program cost. Through interviews with program officials, we discussed the status of factors that increase program life-cycle cost estimates. We examined the Chem-Demil Program’s draft risk management plans to determine if the Army had developed a comprehensive risk management approach to address potential problems that could adversely affect program schedules, cost, and safety. Through an analysis of other risk management plans, we identified the elements of a risk management process. We reviewed CWC documents to determine deadlines for the destruction of the chemical weapons stockpile. We interviewed program officials to discuss the potential implications of not meeting interim milestones and CWC deadlines.
During the review, we visited and obtained information from the Office of the Secretary of Defense; the Assistant Secretaries of the Army (Installations and Environment) and (Acquisition, Logistics, and Technology); the Office of Management and Budget; the Department of State; the Federal Emergency Management Agency; and the DOD Inspector General in Washington, D.C., and we met with the Director of the Chemical Materials Agency and the Program Managers for Chemical Demilitarization and Assembled Chemical Weapons Assessment in Edgewood, Maryland. We also met with project managers, site project managers, state environmental offices, and contractors associated with the disposal sites in Aberdeen, Maryland; Anniston, Alabama; Umatilla, Oregon; and Pine Bluff, Arkansas. In addition, we interviewed Federal Emergency Management Agency officials concerning funding of emergency preparedness program activities. We conducted our review from August 2002 to June 2003 in accordance with generally accepted government auditing standards. When developing schedules, the Army divides the demilitarization process into five major phases: facility design, construction, systemization, operations, and closure. Some activities of one phase may overlap the preceding phase, and the nine sites are at different phases of the process. During the design phase, the Army obtains the environmental permits required to comply with federal, state, and local environmental laws and regulations to build and operate chemical disposal facilities. The permits specify construction parameters and establish operations guidelines and emission limitations. Subsequent engineering changes to the facility are incorporated into the permits through formal permit modification procedures. During this phase, the Army originally solicited contract proposals from systems contractors to build and operate the chemical demilitarization facility and selected a systems contractor.
Now, the Army uses a design/build approach, whereby the contractor completes both phases. The Army originally provided the systems contractors with the design for the incineration facilities; however, systems contractors developed the facility design for the neutralization facilities. During the construction phase, the Army, with the contractor’s input, develops a master project schedule that identifies all major project tasks and milestones associated with site design, construction, systemization, operations, and closure. For each phase in the master project schedule, the contractor develops detailed weekly schedules to identify and sequence the activities necessary to meet contract milestones. Army site project managers review and approve the detailed schedules to monitor the systems contractor’s performance. After developing the schedules, the contractor builds a disposal site and acquires, installs, and integrates the necessary equipment to destroy the stockpile and begins hiring, training, and certifying operations staff. During systemization, the systems contractor also prepares and executes a systemization implementation plan, which describes how the contractor will ensure the site is prepared to conduct agent operations. The contractor begins executing the implementation plan by testing system components. The contractor then tests individual systems to identify and correct any equipment flaws. After systems testing, the contractor conducts integrated operations tests. For example, the contractor uses simulated munitions to test the rocket processing line from receipt of the munitions through incineration. Army staff observe and approve key elements of each integrated operations test, which allows the contractor to continue the systemization process. Once the Army approves the integrated operations test, the contractor tests the system by conducting mini and surrogate trial burns. 
During minitrial burns, the contractor adds measured amounts of metals to a surrogate material to demonstrate the system’s emissions will not exceed allowable rates. In conducting surrogate trial burns, the contractor destroys nonagent compounds similar in makeup to the agents to be destroyed at the site. By using surrogate agents, the contractor tests destruction techniques without threatening people or the environment. Both the minitrial burn test results and the surrogate trial burn test results are submitted to environmental regulators for review and approval. When the environmental regulators approve the surrogate trial burns, the contractor conducts an Operational Readiness Review to validate standard operating procedures and to verify the proficiency of the workforce. During the Operational Readiness Review, the workforce demonstrates knowledge of operating policies and procedures by destroying simulated munitions. After systemization, the contractor begins the operations phase; that is, the destruction of chemical munitions. The operations phase is when weapons and agents are destroyed. Weapons are destroyed by campaign, which is the complete destruction of like chemical weapons at a given site. Operations for incineration and alternative technologies differ. The following examples pertain to an incineration site. In its first campaign, Umatilla plans to destroy its stockpile of M55 rockets filled with one type of nerve agent. Then a second campaign is planned to destroy its stockpile of M55 rockets filled with another type of nerve agent. After each campaign, the site must be reconfigured. The Army refers to this process as an agent changeover. During the changeover, the contractor decontaminates the site of any prior nerve agent residue. The contractor then adjusts the monitoring, sampling, and laboratory equipment to test for the next nerve agent. The contractor also validates the operating procedures for the second agent destruction process. 
Some operating procedures may be rewritten because the processing rates among chemical agents differ. Although the operations staff have been trained and certified on specific equipment, the staff are retrained on the operating parameters of processing VX agent. In the third and fourth campaigns at Umatilla, the contractor plans to destroy 8-inch VX projectiles and 155-millimeter projectiles, respectively. Because the third campaign involves a different weapon than the second campaign (i.e., from rockets in the second campaign to projectiles in the third), the contractor will replace equipment during the changeover. For example, the machine that disassembles rockets will be replaced with a machine that disassembles projectiles. Additionally, a changeover may require certain processes to be bypassed. For instance, if a changeover involved moving from weapons with explosives to weapons without explosives, the explosives removal equipment and deactivation furnace would be bypassed. For the changeover to the fourth campaign at Umatilla, the contractor will adjust equipment to handle differences in weapon size. For example, the contractor will adjust the conveyor system to accommodate the 155-millimeter projectiles. The contractor also will change the location of monitoring equipment. After destruction of the stockpile, the systems contractor begins closing the site. During the closure phase, the contractor decontaminates and disassembles the remaining systems, structures, and components used during the demilitarization effort and performs any other procedures required by state environmental regulations or permits. The contractor removes, disassembles, decontaminates, and destroys the equipment, including ancillary equipment such as pipes, valves, and switches. The contractor also decontaminates buildings by washing and scrubbing concrete surfaces.
Additionally, the contractor removes and destroys the surface concrete from the walls, ceilings, and floors. With the exception of the Umatilla site, the structures will remain standing. Any waste generated during the decontamination process is destroyed. The Army’s nine chemical demilitarization sites are in different phases of the demilitarization process. The Johnston Atoll site completed the destruction of its stockpile and closure is almost complete. The sites at Tooele, Utah, and Aberdeen, Maryland, are in the operations phase, each using a different technology to destroy chemical agent and munitions. The remaining six facilities are in systems design, construction, and/or systemization. Table 8 provides details on the status of each of the nine chemical demilitarization sites.

Chemical Weapons: Lessons Learned Program Generally Effective but Could Be Improved and Expanded. GAO-02-890. Washington, D.C.: September 10, 2002.
Chemical Weapons: FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.
Chemical Weapons Disposal: Improvements Needed in Program Accountability and Financial Management. GAO/NSIAD-00-80. Washington, D.C.: May 8, 2000.
Chemical Weapons: DOD Does Not Have a Strategy to Address Low-Level Exposures. GAO/NSIAD-98-228. Washington, D.C.: September 23, 1998.
Chemical Weapons Stockpile: Changes Needed in the Management of the Emergency Preparedness Program. GAO/NSIAD-97-91. Washington, D.C.: June 11, 1997.
Chemical Weapons and Materiel: Key Factors Affecting Disposal Costs and Schedule. GAO/T-NSIAD-97-118. Washington, D.C.: March 11, 1997.
Chemical Weapons Stockpile: Emergency Preparedness in Alabama Is Hampered by Management Weaknesses. GAO/NSIAD-96-150. Washington, D.C.: July 23, 1996.
Chemical Weapons Disposal: Issues Related to DOD’s Management. GAO/T-NSIAD-95-185. Washington, D.C.: July 13, 1995.
Chemical Weapons: Army’s Emergency Preparedness Program Has Financial Management Weaknesses. GAO/NSIAD-95-94. Washington, D.C.: March 15, 1995.
Chemical Stockpile Disposal Program Review. GAO/NSIAD-95-66R. Washington, D.C.: January 12, 1995.
Chemical Weapons: Stability of the U.S. Stockpile. GAO/NSIAD-95-67. Washington, D.C.: December 22, 1994.
Chemical Weapons Disposal: Plans for Nonstockpile Chemical Warfare Materiel Can Be Improved. GAO/NSIAD-95-55. Washington, D.C.: December 20, 1994.
Chemical Weapons: Issues Involving Destruction Technologies. GAO/T-NSIAD-94-159. Washington, D.C.: April 26, 1994.
Chemical Weapons Destruction: Advantages and Disadvantages of Alternatives to Destruction. GAO/NSIAD-94-123. Washington, D.C.: March 18, 1994.
Arms Control: Status of U.S.-Russian Agreements and the Chemical Weapons Convention. GAO/NSIAD-94-136. Washington, D.C.: March 15, 1994.
Chemical Weapon Stockpile: Army’s Emergency Preparedness Program Has Been Slow to Achieve Results. GAO/NSIAD-94-91. Washington, D.C.: February 22, 1994.
Chemical Weapons Storage: Communities Are Not Prepared to Respond to Emergencies. GAO/T-NSIAD-93-18. Washington, D.C.: July 16, 1993.
Chemical Weapons Destruction: Issues Affecting Program Cost, Schedule, and Performance. GAO/NSIAD-93-50. Washington, D.C.: January 21, 1993.
Chemical Weapons Destruction: Issues Related to Environmental Permitting and Testing Experience. GAO/T-NSIAD-92-43. Washington, D.C.: June 16, 1992.
Chemical Weapons Disposal. GAO/NSIAD-92-219R. Washington, D.C.: May 14, 1992.
Chemical Weapons: Stockpile Destruction Cost Growth and Schedule Slippages Are Likely to Continue. GAO/NSIAD-92-18. Washington, D.C.: November 20, 1991.
Chemical Weapons: Physical Security for the U.S. Chemical Stockpile. GAO/NSIAD-91-200. Washington, D.C.: May 15, 1991.
Chemical Warfare: DOD’s Effort to Remove U.S. Chemical Weapons From Germany. GAO/NSIAD-91-105. Washington, D.C.: February 13, 1991.
Chemical Weapons: Status of the Army’s M687 Binary Program. GAO/NSIAD-90-295. Washington, D.C.: September 28, 1990.
Chemical Weapons: Stockpile Destruction Delayed at the Army’s Prototype Disposal Facility. GAO/NSIAD-90-222. Washington, D.C.: July 30, 1990.
Chemical Weapons: Obstacles to the Army’s Plan to Destroy Obsolete U.S. Stockpile. GAO/NSIAD-90-155. Washington, D.C.: May 24, 1990.
Congress expressed concerns about the Chemical Demilitarization Program’s cost and schedule and its management structure. In 2001, the program underwent a major reorganization. Following a decade-long trend of missed schedule milestones, in September 2001 the Department of Defense (DOD) revised the schedule, extending planned milestones and increasing program cost estimates from the 1998 estimate of $15 billion to $24 billion. GAO was asked to (1) examine the effect that recent organizational changes have had on program performance and (2) assess the progress DOD and the Army have made in meeting the revised 2001 cost and schedule estimates and Chemical Weapons Convention (CWC) deadlines. The Chemical Demilitarization Program remains in turmoil because a number of long-standing leadership, organizational, and strategic planning issues remain unresolved. The program lacks stable leadership at the upper management levels; for example, it has had frequent turnover in the leadership providing oversight. Further, recent reorganizations have done little to reduce the complex and fragmented organization of the program. As a result, roles and responsibilities are often unclear and program actions are not always coordinated. Finally, the absence of a comprehensive strategy leaves the program without a clear road map and methods to monitor program performance. Without these key elements, DOD and the Army have no assurance of meeting their goal of destroying the chemical stockpile in a safe and timely manner and within cost estimates. DOD and the Army have already missed several 2001 milestones and exceeded cost estimates; the Army has raised program cost estimates by $1.2 billion, with other factors still to be considered. Almost all of the incineration sites will miss the 2001 milestones because of schedule delays due to environmental, safety, community relations, and funding issues. Although the neutralization sites have not missed milestones, they have had delays.
DOD and the Army have not developed an approach to anticipate and influence issues that could adversely impact program schedules, cost, and safety. Unless DOD and the Army adopt a risk management approach, the program remains at great risk of missing milestones and CWC deadlines. It will also likely incur rising costs and prolong the public's exposure to the chemical stockpile.
MCPP-N consists of six climate-controlled caves spread across central Norway that are used for the storage of U.S.-owned munitions and ground equipment. In addition, the Norwegian Defence Logistics Organization manages two aviation maintenance facilities, co-located at Norwegian airfields, that contain U.S.-owned aviation support equipment, as well as a pier used for offloading equipment from ships. Figure 1 identifies the locations of these caves, airfield maintenance facilities, and the pier. According to Marine Corps officials, the Norwegian government completed construction of a new pier near the cave at Hammernesodden in July 2014 to facilitate the ability of large U.S. ships to transport large combat vehicles and other equipment into central Norway. Marine Corps and Norwegian officials stated that this pier was paid for solely by the Norwegian government at a cost of approximately $22.5 million (see figure 2). In August 2014, the Navy and the Marine Corps transported a large shipment of combat and other equipment to Norway in support of a transformation to a Marine Air Ground Task Force capability, according to Marine Corps Blount Island Command and Norwegian officials. The equipment transported for storage in the six caves included variants of the M1114 High Mobility Multipurpose Wheeled Vehicle (HMMWV), M1A1 Main Battle Tanks, Tank Retrievers, Armored Breeching Vehicles, Amphibious Assault Vehicles, and several variants of the Medium Tactical Vehicle 7 ½-ton trucks. The photographs below (figs. 3 and 4) show the offloading of the USNS Williams at the newly constructed pier at Hammernesodden and provide an example of the type of ground equipment used to support a Marine Air Ground Task Force that can be found at MCPP-N caves.
As part of the 2005 memorandum of understanding between the United States and Norway, the United States will provide military equipment to be stored in the Norwegian-built caves, and Norway will provide the infrastructure to support the program and will maintain the equipment provided by the United States. Both countries agree to share the program’s operations and maintenance expenses. Under the cost-sharing portion of the agreement, each country agrees to match the other’s financial contributions up to an agreed-upon threshold, which in fiscal year 2014 was $10.5 million each. The cost-sharing agreement does set a maximum contribution by Norway, limiting its contribution either to half of the total costs incurred or to a negotiated ceiling set in U.S. dollars, whichever is less. Table 1 below illustrates the Marine Corps and Navy’s total annual contributions covering the actual direct and indirect programmatic costs for MCPP-N from fiscal years 2010 to 2014. According to officials from the office of the Deputy Commandant of the Marine Corps for Installations and Logistics, the direct costs for MCPP-N include all operations and maintenance expenses incurred by the Marine Corps for both ground equipment and aviation support equipment. In addition, indirect costs cover administrative expenses incurred by Blount Island Command and other Marine Corps organizations as part of the execution of the program. Five organizations are responsible for the support and operation of MCPP-N. Four Marine Corps organizations are responsible for the planning, funding, and management of MCPP-N. The fifth organization, the Norwegian Defence Logistics Organization, is responsible for providing program infrastructure and maintaining MCPP-N equipment prepositioned in Norway. Table 2 below summarizes the primary roles and responsibilities of each organization. The Marine Corps is changing its mix of equipment to address the U.S. European and U.S. 
Africa commands’ strategic and theater-specific operational requirements. Both combatant commands have identified in their contingency plans the need for prepositioned equipment within their respective geographic areas to support their operational requirements and capabilities. The U.S. European Command’s Theater Posture Plan identifies Trondheim, Norway, as a stand-alone prepositioning site for MCPP-N capable of providing equipment to a wide variety of operations. In addition, officials from the U.S. European Command stated that they have developed and are continuing to develop contingency plans that specifically call upon the Marine Corps to maintain prepositioned equipment in Europe to support strategic and theater-specific operational requirements. Although those U.S. Africa Command plans that reference a need for access to prepositioned equipment do not specifically identify MCPP-N as an asset to meet that need, both Marine Corps and U.S. Africa Command officials stated that MCPP-N has served and can continue to serve as a global support asset to meet combatant command requirements. Both the U.S. European and U.S. Africa Commands identify prepositioned equipment in their contingency plans as providing capabilities to support efforts such as crisis response, humanitarian and disaster assistance, and counter-terrorism activities. The Marine Corps reported that it routinely uses MCPP-N equipment sets to support European training and exercises, including the biennial Cold Response exercise in Norway, which trains U.S., Norwegian, and other NATO-allied military forces to operate in cold weather environments; and an annual training activity to carry out security cooperation efforts with the Marine Corps’ Black Sea Rotational Force and other foreign militaries. 
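The cost-sharing arithmetic described earlier, in which Norway's contribution is capped at the lesser of half the total costs incurred or a negotiated dollar ceiling, can be sketched as follows. This is an illustrative simplification of the agreement's terms, not its actual text; the function name and the sample cost figures (everything except the fiscal year 2014 ceiling of $10.5 million) are assumptions.

```python
def norway_contribution(total_costs: float, ceiling: float) -> float:
    """Norway's share under the MCPP-N cost-sharing agreement:
    the lesser of half the total costs incurred or a negotiated
    ceiling in U.S. dollars (simplified sketch of the terms)."""
    return min(total_costs / 2, ceiling)

# With the fiscal year 2014 ceiling of $10.5 million (hypothetical totals):
# at $18 million in total costs, Norway would owe half ($9 million);
# at $24 million, the negotiated ceiling would apply instead.
assert norway_contribution(18_000_000, 10_500_000) == 9_000_000
assert norway_contribution(24_000_000, 10_500_000) == 10_500_000
```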
Marine Corps officials stated that MCPP-N equipment has been used to support training and exercises across the African continent, including the Shared Accord and African Lion exercises, and could be used for other assistance efforts in Africa. From February 1991 to March 2014 the Marine Corps reported that it withdrew equipment from MCPP-N caves in support of training, exercises, and operations within Europe, Africa, Iraq, and Afghanistan. The principal end items withdrawn included tanks, amphibious armored vehicles, light armored vehicles, trucks, and tractors. Marine Corps officials estimated that during this period more than 3,000 principal end items were withdrawn from MCPP-N in support of various training and exercise events, and more than 2,000 principal end items were withdrawn in support of military operations in Iraq and Afghanistan. Further, officials estimated that more than 150 principal end items were withdrawn to support other contingency operations within the European Command’s geographic area, ranging from supporting a Special Purpose Marine Air Ground Task Force in Spain to providing humanitarian assistance in Turkey and the Republic of Georgia. Marine Corps officials estimated that about 50,000 non-principal end items such as sandbags, rations, tents, and cots were withdrawn from MCPP-N over the same period to support various training and exercises as well as contingency operations in Europe, Africa, Iraq, and Afghanistan. In addition, Marine Corps officials reported that MCPP-N equipment was used to provide humanitarian and disaster relief assistance in response to a major earthquake in Turkey in 2011 and wildfires in Russia in 2010. Further, Marine Corps officials stated that the same equipment used for training in cold weather environments could also support DOD’s Arctic Strategy for potential military operations in the Arctic regions, although officials stated that the Marine Corps has conducted no such operations to date. 
In January 2012 the Commandant of the Marine Corps issued planning guidance that called for MCPP-N to be able to support a Marine Air Ground Task Force. Subsequently, the Marine Corps began its effort to change the mix of equipment found at MCPP-N storage facilities. This change occurred as a response to DOD’s efficiency initiatives, to strengthen the effectiveness of MCPP-N, and to bolster the Marine Corps’ prepositioning capabilities. In addition, as a result of the Department of the Navy’s decision to discontinue in 2012 a maritime prepositioning ship squadron located within the Mediterranean, the U.S. European Command has heightened its reliance on MCPP-N to support its prepositioning requirements. The Marine Corps guidance specifically calls for MCPP-N to be able to support a force of approximately 4,500 Marines to respond to the first 2 weeks of combat at the mid-intensity conflict level of the range of military operations, and to support theater security cooperation activities. Marine Corps officials stated that the equipment set at MCPP-N is intended to provide the capabilities to enable a Marine Corps force to respond to any type of crisis or operation globally, and thus MCPP-N is not assigned to any specific combatant command. To ensure that MCPP-N effectively meets the Marine Corps’ needs and better aligns with combatant command strategic and theater-specific operational requirements, the Marine Corps annually updates the mix of equipment found at MCPP-N storage facilities in Norway. The current prepositioning objective, which was last revised in February 2015, calls for MCPP-N to support crisis response-type missions and theater security cooperation engagement activities for the combatant commands. It also calls for an equipment set that includes combat equipment such as HMMWVs, light armored vehicles, amphibious armored vehicles, and Abrams tanks necessary to support mid-intensity conflicts. 
Marine Corps officials stated that as of March 2015 MCPP-N had acquired 63 percent of the equipment it needed to meet its current prepositioning objective. Marine Corps officials we interviewed observed, however, that this attainment level of equipment may change periodically, depending on the Marine Corps’ identified prepositioning needs, as the prepositioning objective is generally revised on an annual basis. Marine Corps cost estimates for sustaining the equipment to support a Marine Air Ground Task Force capability may not be fully reliable, in that they do not fully meet the four general characteristics for reliable cost estimating—that is, being accurate, well-documented, credible, and comprehensive—as identified in GAO’s Cost Estimating and Assessment Guide. Reliable cost estimates provide the basis for informed investment decision making and realistic budget formulation and program resourcing. Each year, the Logistics Plans and Operations Branch of the Deputy Commandant of the Marine Corps for Installations and Logistics consolidates the Marine Corps’ portion of direct program costs for MCPP-N in support of developing a consolidated and comprehensive budget estimate for the program objective memorandum. This includes a 5-year budget projection to fund program initiatives—such as the cost-sharing agreement with Norway—and theater security cooperation requirements for the U.S. European and U.S. Africa Commands. Current Marine Corps guidance requires budget estimates to contain defendable funding requirements that extend across multiple fiscal years and support both short- and long-term program objectives. The Marine Corps’ approved program objective memorandum for MCPP-N for fiscal years 2015 to 2019 includes budget estimates (see table 3) for direct costs for operations and maintenance. 
According to GAO’s Cost Estimating and Assessment Guide, the cost estimate is a critical element in the budgeting process that helps decision makers to evaluate resource requirements at milestones and other important decision points. Cost estimates establish and defend budgets and drive affordability analyses. The guide identifies four characteristics of reliable cost estimates—that is, they should be accurate, well-documented, credible, and comprehensive. Based on our review of the budget estimates identified in the program objective memorandum budget for MCPP-N for fiscal years 2015 to 2019, we found that the Marine Corps’ cost estimates for MCPP-N (1) partially met the “accurate” characteristic; (2) partially met the “well-documented” characteristic; (3) did not meet the “credible” characteristic; and (4) partially met the “comprehensive” characteristic of a reliable estimate. Table 4 provides more information on our assessment of the program objective memorandum budget for MCPP-N based on the four characteristics. We found that the cost estimates partially met the characteristic for accuracy in that the Marine Corps program objective memorandum estimates for MCPP-N are updated as part of both the budget execution and program objective memorandum development processes. However, officials at the Office of the Deputy Commandant for Installations and Logistics told us that while Marine Corps components maintain source data or calculations, the components are not required to include this information as part of their cost estimate submissions. In addition, the Marine Corps does not track the variances between planned and actual costs to demonstrate the accuracy of its cost estimates and how the program is changing over time. We, therefore, could not assess whether the estimates were properly adjusted for inflation, nor could we check the results for accuracy. Without access to cost estimate details, the accuracy of the estimates cannot be determined. 
We found that the cost estimates partially met the well-documented characteristic in that the documentation provided by the Marine Corps does not include the source data used to develop the cost estimates for the program objective memorandum process. The documentation does not reflect the calculations performed or the estimating methodologies used by the Marine Corps, and does not describe the step-by-step process used to develop the estimate. Without well-documented cost estimates that include calculations and estimating methodologies, the Installations and Logistics office cannot provide complete answers to questions about the development of cost estimates or explain the reasons behind changes to the estimates over time. We found that estimates did not meet the characteristic for credibility. Based on our review of the program objective memorandum documentation for MCPP-N, we did not find documentation to demonstrate that systematic cross-checks of major cost elements were performed. Marine Corps Installation and Logistics officials stated that they do not perform cross-checks to assess the component and subordinate commands’ proposed budget estimates, and that if questions arise about the components and subordinate commands’ assumptions they engage in discussions to understand the reasoning behind them. However, this method used by the Installations and Logistics office to determine the accuracy of the components’ and subordinate commands’ assumptions cannot easily be replicated by an independent party. Such cross-checks of major cost elements can reveal whether applying a different cost-estimating method produces similar results. Without credible cost estimates, the Installations and Logistics office may not be able to determine the level of risk, uncertainty, or confidence associated with achieving proposed budget estimates. 
Consequently, management may have difficulty in identifying the available resources needed to address budget estimates in future program objective memorandum cycles to meet MCPP-N program requirements. We found the cost estimates to be partially comprehensive in that they included all types of program costs supporting MCPP-N but did not include a detailed funding plan for the costs to transform the program to support a Marine Air Ground Task Force. Marine Corps officials stated that they had no specific funding plan for the transformation because it was not a fiscally driven event. Officials stated, however, that identifying all costs associated with the transformation would prove difficult because they do not track all funding sources that support MCPP-N, such as transportation costs, which are tracked through a separate budget account. However, we found that they did identify some costs needed to support the transformation. For instance, officials identified costs of about $750,000 for fiscal year 2016 to employ three U.S. contractors in Norway to manage cryptographic equipment as part of a caretaker detachment. They also stated that between fiscal years 2012 and 2013 about $2 million was apportioned from another prepositioning program source to procure support items such as ancillary gear, lubricants, and batteries to operate and sustain new equipment supporting the Marine Air Ground Task Force. We also found that there was no standardized structure for collecting costs at a level of detail necessary to demonstrate that estimates are acceptable and reflect justification of resources. While Marine Forces generally are required to submit budget requests in a specific template to the Marine Corps’ Logistics Plans and Operations Branch, Blount Island Command is not required to use any template. 
Further, the cost-estimating documentation provided within the program objective memorandum submission did not include specific details on all factors and assumptions influencing costs, such as inflation indexes and potential costs arising from the purchase of parts to support new equipment sets such as tanks, amphibious assault vehicles, light armored vehicles, and communication capabilities. By not having a standardized structure for collecting cost estimates across organizations, the Installations and Logistics office cannot be certain that it has all the information necessary to ensure that the cost estimates provided are correct. Marine Corps officials stated that while they have taken some steps to improve their cost estimates for developing the budget, the current DOD guidance for developing the program objective memorandum does not include procedures that embody the characteristics of reliable cost estimating as identified in GAO’s prior work. In their view, better guidance would enable them to ensure that subordinate and component commands understand how to develop and document cost estimates. Officials stated that in response to recent changes with the consolidation of the program objective memorandum for prepositioning programs, the Marine Corps is drafting guidance for assisting in the development of budget plans. As of May 2015 the draft guidance had not been finalized, but Marine Corps officials stated they had no plans for the new guidance, which they expect to issue in the fall of 2015, to address the four characteristics of reliable cost estimates. The Marine Corps relies upon the Norwegian Equipment Information Management System for data needed to manage its equipment inventory at MCPP-N due to long-standing limitations in the Global Combat Support System - Marine Corps. Although the Marine Corps is working to improve its information system, these solutions will likely take several years to implement. 
The reliance on two different information systems, one of which is owned and operated by a foreign government, creates several management challenges and risks to data reliability for the Marine Corps. For example, it results in a time lag in the accuracy of information in the Marine Corps system until that system is manually updated with information from the Norwegian system—a process that is time-consuming and vulnerable to the risk of introduction of errors. However, the Marine Corps and the Norwegians have taken some steps to mitigate these risks in the interim, until the Marine Corps system is capable of replacing the Norwegian system. Additionally, relying on the Norwegian system for management information makes the Marine Corps vulnerable to any weaknesses that may exist within the Norwegian system. Nevertheless, the Marine Corps has not conducted a quality assurance review of the Norwegian system. Performing such a review would be consistent with Marine Corps regulations and federal internal control standards, and it would constitute a key step toward mitigating potential weaknesses in the Norwegian Equipment Information Management System. The Marine Corps relies on two different information systems—(1) the Global Combat Support System - Marine Corps, and (2) the Norwegian Equipment Information Management System—to maintain visibility and accountability over prepositioned assets stored at MCPP-N. The Global Combat Support System - Marine Corps is the service’s enterprise-wide logistics information management system designed to serve as the backbone for all logistics information required by a Marine Air Ground Task Force. The Norwegian Equipment Information Management System is the data system that the Marine Corps and its Norwegian counterparts relied on to manage the ground equipment until 2012. 
Since July 2012, Blount Island Command has used the Global Combat Support System - Marine Corps as the official program of record for maintenance, spare parts, and cost data related to the management of MCPP-N equipment. However, due to limitations in the Marine Corps’ system, the Marine Corps continues to rely on the Norwegian system for key inventory management data. For example, the Global Combat Support System - Marine Corps lacks a warehousing application and other data management capabilities that Blount Island Command needs to effectively manage MCPP-N equipment stored in the Norwegian caves. As noted earlier, Marine Corps equipment is distributed among six caves. While the current version of the Global Combat Support System – Marine Corps can track which cave each piece of equipment is stored in, the system cannot record the equipment’s specific location within the cave. According to Norwegian officials, given the size of the caves, having the equipment’s specific location within each cave is essential for efficient equipment management. For example, the exact location of the equipment is critical to conducting efficient inventory checks and scheduled maintenance, and for withdrawing equipment for training exercises, humanitarian relief efforts, and contingency operations. As a result, the Marine Corps is reliant on the Norwegian System for this information. These limitations in the capabilities of the Global Combat Support System – Marine Corps are long-standing issues that the Marine Corps has recognized and is working to address. For example, as we reported in March 2014, according to Marine Corps Business System Integration Team officials, the initial plan was for the first version of the Global Combat Support System - Marine Corps, referred to as Increment 1, to include a warehousing application. 
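The gap described above, a system that can record which cave holds an item but not where within the cave it sits, can be illustrated with a minimal record layout. The field names and sample values are hypothetical, not the schema of either system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EquipmentRecord:
    serial_number: str
    nomenclature: str
    cave: str                        # trackable in GCSS-MC (which of the six caves)
    in_cave_location: Optional[str]  # aisle/bay within the cave; held only in the
                                     # Norwegian system, so absent (None) in GCSS-MC

# Illustrative record as GCSS-MC could hold it: the cave is known, but the
# position inside the cave must be supplied by the Norwegian system.
tank = EquipmentRecord("M1A1-0042", "M1A1 Main Battle Tank",
                       cave="Hammernesodden", in_cave_location=None)
```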
However, as the rollout progressed through 2012, the officials stated that technical challenges, cost increases, and schedule delays caused the Marine Corps to lack sufficient funds to incorporate the warehousing application in Increment 1. Over the past several years we have issued a series of reports on the acquisition of major automated information systems. Our 2014 and 2015 reports included a review of the Global Combat Support System - Marine Corps and the associated challenges entailed in implementing Increment 1. According to officials from Blount Island Command, when it became apparent that Increment 1 would lack the needed warehousing application, the Marine Corps explored available options with the Norwegian Defence Logistics Organization. They elected to continue using the Norwegian Equipment Information Management System because it contained a warehousing and inventory management application. Marine Corps Headquarters and Blount Island Command officials stated that they intend to discontinue their reliance on the Norwegian system once a warehousing application becomes available on the Marine Corps’ system. In the meantime, they will rely on the two information systems to provide all the computer functions necessary for effective inventory management. Marine Corps officials stated that they did not know when the warehousing application will become available but expect it to be incorporated into a future increment, provided that there are available funds. While Marine Corps and Norwegian officials agree that retaining the Norwegian system is the best available option until the Marine Corps system is capable of completely replacing the Norwegian system, several challenges exist with respect to managing the interface between the two independent information systems. For example, because these systems are owned by separate governments, security concerns prevent the Marine Corps from allowing the systems to directly interact electronically. 
Consequently, inventory data from the Norwegian Equipment Information Management System is required to be manually extracted and uploaded into the Global Combat Support System – Marine Corps. This results in a time lag in the accuracy of information in the Marine Corps system, until it is manually updated with information from the Norwegian system—a process that is time-consuming and creates the risk of errors. The Marine Corps and the Norwegians have taken some steps to mitigate the risks in this process until the Marine Corps system replaces the Norwegian system. For example, Blount Island Command and Norwegian Defence Logistics Organization officials stated that they have a process to identify discrepancies between the two systems and then use a Marine Corps contractor to validate and enter inventory data from the Norwegian system to update the Marine Corps system. The following overview of the flow of inventory data as equipment arrives and is stored in the six Norwegian caves shows how data reliability challenges arise from the use of these two systems. Specifically, equipment designated for MCPP-N is assigned to Blount Island Command in the Global Combat Support System – Marine Corps. After undergoing a maintenance process at Blount Island Command to ensure that it is ready for use, the equipment is shipped to Norway for storage, and its shipment data are entered into the Marine Corps system. As was explained and demonstrated to us, when equipment arrives in Norway, the Norwegian Defence Logistics Organization records its receipt and inventory data in the Norwegian Equipment Information Management System. 
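The discrepancy-identification step described above can be sketched as a comparison of extracts from the two systems keyed on serial number. This is a hypothetical illustration of the reconciliation idea; the function, field names, record shapes, and cave labels are assumptions, not the systems' actual schemas or rules.

```python
def find_discrepancies(norwegian_export, gcss_records):
    """Flag records that are missing from one system or that disagree on
    the assigned cave (illustrative reconciliation sketch)."""
    no = {r["serial"]: r for r in norwegian_export}
    us = {r["serial"]: r for r in gcss_records}
    discrepancies = []
    for serial in sorted(no.keys() | us.keys()):
        if serial not in us:
            discrepancies.append((serial, "missing from GCSS-MC"))
        elif serial not in no:
            discrepancies.append((serial, "missing from Norwegian system"))
        elif no[serial]["cave"] != us[serial]["cave"]:
            discrepancies.append((serial, "cave assignment mismatch"))
    return discrepancies

# Example with hypothetical serial numbers and cave labels:
norwegian = [{"serial": "A-1", "cave": "Cave 1"}, {"serial": "A-2", "cave": "Cave 2"}]
gcss      = [{"serial": "A-1", "cave": "Cave 1"}, {"serial": "A-3", "cave": "Cave 2"}]
print(find_discrepancies(norwegian, gcss))
# [('A-2', 'missing from GCSS-MC'), ('A-3', 'missing from Norwegian system')]
```

Each flagged pair would then go to a contractor for validation, possibly including the physical serial-number check the Norwegian officials described.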
Once the Norwegian staff have assigned an equipment storage location—the designated cave and the equipment’s location within that cave—they transfer the key data elements that can be added to the Marine Corps system to an interim database known as the “Change Log.” Norwegian officials explained that one of their former staff created the Change Log feature in the Norwegian system in January 2014 to provide a mechanism whereby Norwegian staff could resolve discrepancies in the inventory data between the Marine Corps and Norwegian systems, in consultation with the Marine Corps contractors. The contractors receiving data from the Change Log are responsible for reviewing and validating the submitted equipment’s cave location and other inventory data before updating the record in the Global Combat Support System - Marine Corps. According to Norwegian officials, the Change Log has been instrumental in reducing the backlog of data discrepancies from more than 3,000 in January 2014 to fewer than 800 in November 2014. They explained that these discrepancies between the Marine Corps and Norwegian systems developed largely due to the changing mix of equipment prepositioned at MCPP-N to support a Marine Air Ground Task Force capability. Norwegian officials also reported mismatches in equipment serial numbers in both systems. They stated that as a result of such problems, a physical check of the serial number is often required to reconcile the data discrepancies between the two systems. Norwegian officials indicated that having their maintenance personnel provide additional information (documentation and photographs) to reconcile inventory data between the two systems negatively affects Norwegian maintenance operations because they have limited maintenance time and resources. The Change Log serves as an application control for information entering the Global Combat Support System - Marine Corps. 
Standards for Internal Control in the Federal Government state that an application control should be installed at an application’s interface with other systems to ensure that all inputs are received and that valid outputs are correct and properly distributed. The Change Log constitutes a computerized “edit” built into the interface that helps the Marine Corps to review the format, existence, and reasonableness of the data from the Norwegian system before it enters the Marine Corps system. While the Change Log demonstrates an application control to mitigate data discrepancies between the two information management systems, it does not represent a long-term solution. Marine Corps and Norwegian officials anticipate that data discrepancies will continue to occur whenever there is a change in the cave location of equipment—such as when it returns to a cave after maintenance, a training exercise, humanitarian relief effort, or contingency operation—and also when it enters a cave for the first time, due to decisions to upgrade or change the mix of prepositioned equipment. Until the Global Combat Support System – Marine Corps is modified to include a warehousing application and can replace the Norwegian system, Marine Corps and Norwegian officials will continue to rely on two information management systems that generate ongoing data discrepancies and related data reliability challenges. The Marine Corps’ Blount Island Command conducts an annual quality assurance inspection to monitor, measure, and analyze data to ensure the effectiveness of MCPP-N. This inspection includes an assessment of the condition of the equipment and of the maintenance processes, along with a review of the inventory. However, the quality assurance inspection does not include a review of the Norwegian Equipment Information Management System, which serves as one of the key reporting systems for managing inventory data. 
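The kind of computerized "edit" described above, reviewing the format, existence, and reasonableness of incoming data, might look like the following sketch. The field names, serial-number pattern, and cave list are assumptions for illustration, not the Change Log's actual rules.

```python
import re

KNOWN_CAVES = {"Hammernesodden"}  # illustrative; a real check would list all six caves

def validate_entry(entry):
    """Existence, format, and reasonableness checks applied before a
    Change Log record is accepted into the Marine Corps system."""
    errors = []
    # Existence: required fields must be present and non-empty
    for field in ("serial", "cave", "nomenclature"):
        if not entry.get(field):
            errors.append(f"missing field: {field}")
    # Format: serial numbers assumed to be uppercase alphanumeric with dashes
    serial = entry.get("serial", "")
    if serial and not re.fullmatch(r"[A-Z0-9-]+", serial):
        errors.append("serial number has unexpected format")
    # Reasonableness: the storage cave must be a known site
    if entry.get("cave") and entry["cave"] not in KNOWN_CAVES:
        errors.append("unknown storage cave")
    return errors

good = {"serial": "M1A1-0042", "cave": "Hammernesodden", "nomenclature": "M1A1 tank"}
assert validate_entry(good) == []
assert "missing field: cave" in validate_entry({"serial": "M1A1-0042", "nomenclature": "tank"})
```

A record that fails any check would be held in the Change Log for resolution rather than entering the Marine Corps system, which is the behavior the internal control standards call for at a system interface.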
Performing such a review would be consistent with Marine Corps regulations and federal internal control standards and would constitute a key step toward mitigating potential weaknesses in the Norwegian Equipment Information Management System. The data standards for information systems supporting Marine Corps prepositioning are provided in Marine Corps Order 3000.17 and state that data must be accurate and timely, must provide visibility of prepositioning materiel to planners at all levels, must be maintained to standards at the source of generation, and must be standardized for both afloat and ashore prepositioning programs. The Marine Corps Order also references the Marine Corps Technical Manual for MCPP-N, which states that Blount Island Command is responsible for developing and administering the Quality Assurance Program, and that the Quality Assurance Program shall include a review of shelf life items, scheduled maintenance and inventory cycles, and the adequacy of reporting systems. In addition, the Blount Island Command Quality System Manual states that Blount Island Command retains the overall responsibility for exercising sufficient control for processes performed by external organizations that provide logistics support for MCPP-N. Federal internal control standards also indicate the need for a quality assurance review of the Norwegian system. According to those standards, information systems have two main types of control activities—general and application controls. General controls are the policies and procedures that apply to all or a large segment of an entity’s information systems and facilitate their proper operation, to include system development and maintenance, security, management, logical and physical access, access security, and contingency planning. Application controls are incorporated directly into computer applications to achieve validity, completeness, accuracy, and confidentiality of transactions and data during application processing. 
According to the standards for internal controls, general and application controls over computer systems are interrelated. General controls support the functioning of application controls, and both are needed to ensure complete and accurate information processing. If general controls are inadequate, application controls are unlikely to function properly and could be overridden. A quality assurance review can entail reviewing selected general and application controls within the information system through activities such as: reviewing operating and database management systems; assessing security controls that protect the system and network from inappropriate access or unauthorized use; and performing tests on the system to ensure that it has the proper edit checks to review the format, existence, and reasonableness of data. Further, the Local Bilateral Agreement for MCPP-N, which serves as an internal working document between Blount Island Command and the Norwegian Defence Logistics Organization and outlines roles and responsibilities for each organization, identifies Blount Island Command as being responsible for completing annual quality assurance inspections on the maintenance and storage of equipment managed by the Norwegian Defence Logistics Organization. Blount Island Command officials stated that their annual quality assurance review does not focus on information systems, and they further noted that the Norwegians’ system is foreign-owned and therefore not within their jurisdiction. The Local Bilateral Agreement does not specifically require the Marine Corps to conduct a review of the Norwegian System. However, as a working document the agreement is regularly updated and can be amended to incorporate additional provisions such as allowing the Marine Corps to conduct a quality review of the Norwegian system. 
The Norwegian Equipment Information Management System provides the Marine Corps with capabilities not currently available in its own system, such as the ability to track equipment calibration and a warehousing function, but Norwegian Defence Logistics Organization officials stated that they recognize their system has certain vulnerabilities. For example, during our review of the system we observed several weaknesses, including minimal documentation on the system, the lack of formal training and procedures for staff performing data entry, and the reliance on a single person, a retired Norwegian staff member, for the system’s technical programming and maintenance needs. To address known data entry problems, Norwegian officials are considering a proposal that would reduce the number of data entry points into the Norwegian system from three locations to one centrally managed location. However, Norwegian officials stated that they had not conducted an overall quality assurance review of the information system, raising further questions about its potential vulnerabilities. Although the current Local Bilateral Agreement does not contain guidance and instructions for conducting an assessment of the Norwegian Equipment Information Management System, the Marine Corps is responsible for ensuring that the Norwegian system provides accurate data on the inventory of stored assets managed at MCPP-N. Further, Marine Corps officials stated that they rely on the Norwegian system to carry out the data management functions discussed above. Without a quality assurance review of the Norwegian system, the Marine Corps risks undetected vulnerabilities in MCPP-N’s inventory data. 
If the Marine Corps does not conduct a quality assurance review of the Norwegian system, it may not be able to determine whether inventory data are complete, accurate, reliable, and reasonably free from error, and thus whether equipment is readily available to support the combatant commanders’ requirements. The Marine Corps is transforming MCPP-N’s posture from an engineering and transportation capability to a balanced Marine Air Ground Task Force capability that supports both the U.S. European and U.S. Africa commands’ operational requirements for prepositioned equipment sets capable of supporting crisis response operations and theater security cooperation activities. While the Marine Corps continues to develop cost estimates for its budget to determine the level of funding needed to meet current and future program obligations for MCPP-N, its current methods do not fully meet the four characteristics of a reliable cost estimate. The Marine Corps has taken some steps to improve its efforts in developing reliable cost estimates by drafting new guidance for subordinate and component commands to develop budget estimates. While this represents a positive step, without fully incorporating the four characteristics of a reliable cost estimate in the draft guidance, the Marine Corps cannot ensure that its budget planning efforts for MCPP-N are based upon sound planning that is justifiable, defendable, and accountable. Furthermore, the lack of a warehousing application in the Global Combat Support System - Marine Corps has limited Blount Island Command’s ability to provide adequate visibility and accountability over prepositioned inventory stored in Norway, and consequently the Marine Corps continues to rely on the Norwegian Equipment Information Management System to manage the warehousing and inventory of equipment. 
Without a quality assurance review that assesses the Norwegian system, the Marine Corps cannot ensure that the inventory data it provides are accurate and reliable. To better determine the costs needed to sustain the equipment to support a Marine Air Ground Task Force capability, we recommend that the Commandant of the Marine Corps direct the Deputy Commandant for Installations and Logistics to incorporate the four characteristics of reliable cost estimates in the Marine Corps’ forthcoming prepositioning programs budget development policy, and specifically to take the following actions: To ensure that estimates are accurate and well-documented, require all relevant departments and subordinate commands to provide documentation of cost-estimating details that include both source data and calculations; To ensure that estimates are credible, implement management requirements to establish and conduct formal cross-checks of major cost elements among the relevant departments and subordinate commands to determine whether they are replicable; and To ensure that estimates are comprehensive, implement a standardized structure for collecting all the necessary details used to develop and support cost estimates from all relevant departments and subordinate commands. As part of its quality assurance program for ensuring that the Marine Corps has accurate and reliable information on inventory data for stored assets used to support combatant commanders’ requirements, we recommend that the Commandant of the Marine Corps, in consultation with the Norwegian Defence Logistics Organization, take steps to update the Technical Manual on Logistics Support for the Marine Corps Prepositioning Program – Norway and the Local Bilateral Agreement, to incorporate guidance and instructions on conducting a quality assurance review that assesses the accuracy and reliability of the Norwegian Equipment Information Management System. 
We provided draft copies of this report to the Department of Defense and the Department of State. Additionally, we provided relevant portions of the draft report to the Norwegian Defence Logistics Organization to ensure its technical accuracy. In written comments provided for DOD on this draft, the Marine Corps agreed with all four of our recommendations; its comments are reprinted in their entirety in appendix II. The Department of State had no comments on the draft report. The Norwegian Defence Logistics Organization generally agreed with the relevant portions of the draft that we sent it and provided technical comments that we incorporated as appropriate. The Marine Corps concurred with our first, second, and third recommendations—that the Commandant of the Marine Corps direct the Deputy Commandant for Installations and Logistics to incorporate the four characteristics of reliable cost estimates in the Marine Corps’ forthcoming prepositioning programs budget development policy, and specifically take actions to ensure that estimates are accurate and well-documented, credible, and comprehensive. The Marine Corps stated that the forthcoming Prepositioning Programs Budget Development Order will address the four characteristics of reliable cost estimates to ensure that estimates are accurate, credible, and comprehensive, and that the draft Budget Development Order will initially be staffed to the prepositioning community at the end of fiscal year 2015, with a target date for publishing by the end of the 2nd quarter of fiscal year 2016. We believe that these actions, if fully implemented, would address our recommendations. 
The Marine Corps also concurred with our fourth recommendation—that the Commandant of the Marine Corps, in consultation with the Norwegian Defence Logistics Organization, take steps to update the Technical Manual on Logistics Support for the Marine Corps Prepositioning Program – Norway and the Local Bilateral Agreement, to incorporate guidance and instructions on conducting a quality assurance review that assesses the accuracy and reliability of the Norwegian Equipment Information Management System. The Marine Corps stated that it will incorporate guidance and instructions on conducting a quality assurance review that assesses the accuracy and reliability of the Norwegian Equipment Information Management System into the Technical Manual and Local Bilateral Agreement. The Marine Corps also stated that the use of the Norwegian system and the Change Log are not long-term solutions for the Marine Corps Prepositioning Program – Norway, and that as soon as the Global Combat Support System adds a warehousing module, currently under development, the Marine Corps will implement it in Norway. We acknowledge the current limitations of the Global Combat Support System in our report, and we believe that the Marine Corps’ proposed actions regarding efforts to include a quality assurance review of the accuracy and reliability of inventory data from the Norwegian system address the intent of our recommendations. We further believe that these actions, if fully implemented, should help improve the quality of inventory information until the warehousing module for the Marine Corps is in place. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, the Secretary of the Navy, the Commandant of the Marine Corps, and the Norwegian Defence Logistics Organization. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact Cary Russell at (202) 512-5431 (russellc@gao.gov). Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Senate Report 113-176, accompanying the National Defense Authorization Act for Fiscal Year 2015, included a provision that we review MCPP-N and determine the extent to which (1) MCPP-N addresses U.S. European Command and U.S. Africa Command requirements; (2) reliable cost estimates exist to fund MCPP-N’s sustainment of equipment to support a Marine Air Ground Task Force capability; and (3) the Marine Corps has quality assurance procedures in place to monitor the management of MCPP-N. To determine the extent to which MCPP-N addresses U.S. European Command and U.S. Africa Command requirements, we obtained, reviewed, and analyzed plans, policies, and guidance on MCPP-N detailing the program and its support to the Marine Corps and combatant commands, such as the January 2012 Commandant of the Marine Corps Planning Guidance for Marine Corps Prepositioning Program—Norway. We also reviewed GAO’s prior work addressing DOD’s management and reporting of prepositioning. We collected and reviewed a theater posture plan and contingency plans obtained from the U.S. European Command and U.S. Africa Command on their strategic and operational requirements, including the need for a Marine Air Ground Task Force capability. We also collected documentation from the Marine Corps containing the type and mix of equipment required to support a Marine Air Ground Task Force. Further, we reviewed the combatant command plans to determine the extent to which they rely on prepositioned equipment to meet theater-specific requirements. 
We also collected and reviewed unit after action reports and briefings that provided an evaluation of the equipment obtained from MCPP-N for training and annual exercises, and to understand how the equipment met their needs. We collected documents from the Norwegian Armed Forces on Norway’s role and relationship with MCPP-N and visited several cave sites in Norway to observe U.S.-owned equipment stored in support of the Marine Air Ground Task Force. We focused our review on ground equipment stored at MCPP-N because the program is transforming the equipment set from an engineering and transportation capability to a Marine Air Ground Task Force capability. We met with and interviewed various DOD and other organizations that directly or indirectly support MCPP-N. Tables 5 and 6 include a list of the DOD and other organizations we met with and interviewed during our review. To determine the extent to which reliable cost estimates exist to fund MCPP-N’s sustainment of equipment to support a Marine Air Ground Task Force capability and to identify the process and steps used to develop the budget estimates, we collected and analyzed projected budget data and supporting budget documentation for MCPP-N from fiscal years 2015 through 2019. We obtained a copy of the Marine Corps’ program objective memorandum program review briefings from fiscal years 2015 through 2019 and conducted an analysis to determine how each cost element associated with budget estimate data was calculated by examining the basis of the budget estimates and assessing the strength and quality of the supporting budget documentation provided. 
We verified that the parameters used to create the budget estimates were valid and applicable by posing formal questions and conducting interviews with officials in the Deputy Commandant for Installations and Logistics, Logistics Plans and Operations Branch, to understand their methodology for developing budget estimates, and determining whether other sources were available for cross-checking those estimates. We verified that calculations were correct for each cost element, and verified that elements were accurately summed to arrive at the overall budget estimate. We assessed whether the budget estimates were sufficiently reliable for our purposes and met GAO’s Cost Estimating and Assessment Guide for best practices, and the four general characteristics of a reliable cost estimate—accurate, credible, well-documented, and comprehensive. Each characteristic consists of several individual assessments. We assessed each characteristic by assigning each individual assessment a numerical rating: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. We took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. The resulting average became the overall characteristic assessment as follows: Not Met = 1.0 to 1.4, Minimally Met = 1.5 to 2.4, Partially Met = 2.5 to 3.4, Substantially Met = 3.5 to 4.4, and Met = 4.5 to 5.0. A cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. If any of the characteristics are not met, are minimally met, or are partially met, the cost estimate does not fully reflect the characteristics of a reliable estimate. We recorded the results of our analysis and found that the budget estimates partially met the accurate, well-documented, and comprehensive characteristics, and that they did not meet the credible characteristic of a reliable estimate. 
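The scoring and aggregation scheme described above can be sketched as a short calculation. This is an illustrative sketch only: the rating scale, averaging step, and threshold ranges come from the text, while the function names and sample scores are hypothetical.

```python
# Rating scale for each individual assessment, as described in the methodology.
RATINGS = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
           "Substantially Met": 4, "Met": 5}

def overall_rating(individual_scores):
    """Average the individual assessment scores for one characteristic and
    map the average onto the stated ranges (1.0-1.4 Not Met, 1.5-2.4
    Minimally Met, 2.5-3.4 Partially Met, 3.5-4.4 Substantially Met,
    4.5-5.0 Met)."""
    avg = sum(individual_scores) / len(individual_scores)
    if avg < 1.5:
        return "Not Met"
    if avg < 2.5:
        return "Minimally Met"
    if avg < 3.5:
        return "Partially Met"
    if avg < 4.5:
        return "Substantially Met"
    return "Met"

def estimate_is_reliable(characteristic_scores):
    """A cost estimate is considered reliable only if every one of the four
    characteristics is substantially or fully met."""
    return all(overall_rating(scores) in ("Substantially Met", "Met")
               for scores in characteristic_scores.values())

# Hypothetical individual assessment scores for the four characteristics.
scores = {"accurate": [3, 3, 4], "well-documented": [3, 2, 4],
          "credible": [1, 2, 2], "comprehensive": [3, 3, 3]}
print({c: overall_rating(s) for c, s in scores.items()})
print("Reliable:", estimate_is_reliable(scores))
```

With these hypothetical scores, the credible characteristic averages below 2.5 and the others fall in the partially met range, so the estimate as a whole would not be considered reliable, mirroring the pattern of findings reported above.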
Because the budget estimates did not meet all of the characteristics of a reliable cost estimate, we considered them not to be fully reliable. To determine the extent to which the Marine Corps has quality assurance procedures in place to monitor the management of MCPP-N, we reviewed the May 2009 United States Marine Corps Technical Manual on Logistics Support for MCPP-N, the 2013 Local Bilateral Agreement between Blount Island Command and the Norwegian Defence Logistics Organization, and the 2012 Blount Island Command ISO 9001:2008 Quality System Manual. We obtained examples of annual quality assurance inspection and work instruction reports and compared the reports against the Marine Corps’ quality assurance procedures to determine how the reviews were conducted. We also collected studies, reports, and briefings on the Global Combat Support System – Marine Corps and the Norwegian Equipment Information Management System to determine how the Marine Corps and Norwegians rely on these two information management systems to maintain visibility and accountability over prepositioned equipment in Norway. We conducted a series of interviews with Marine Corps and Norwegian officials using a set of standard data reliability questions to learn about their general and application controls for conducting system operations and data processing; the chain of custody used to transfer and record data between two information management systems that do not interface with each other because of jurisdiction boundaries; and the quality assurance procedures used to assess the reliability of inventory data and systems. We interviewed officials from Blount Island Command and the Norwegian Defence Logistics Organization to learn about the challenges they have encountered in using two information management systems to support MCPP-N and the management oversight they have used to mitigate deficiencies. 
In addition, we conducted site visits at the Frigard cave, Hammernesodden cave and pier, and the aviation maintenance facilities at the Vaernes airfield. During these site visits, we observed and photographed the storage and maintenance facilities; observed the procedures Norwegian staff followed to enter data into the Global Combat Support System - Marine Corps and the Norwegian Equipment Information Management System; observed their data reconciliation procedures; and observed the manual record keeping they used to supplement their data entry procedures. While on site, we obtained copies or photographs of some of their training and reference materials and data entry procedures. Marine Corps and Norwegian officials provided us with system demonstrations of the Global Combat Support System and the Norwegian Equipment Information Management System to acclimate us to both systems’ data management features for tracking, recording, and storing data on prepositioned equipment. Finally, we interviewed officials from the Global Combat Support System – Marine Corps Business System Integration Team to inquire about the Marine Corps’ plans to incorporate a warehousing application to allow Marine Corps organizations to collect inventory data. We conducted this performance audit from August 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions. In addition to the contact named above, Larry Junek (Assistant Director); Brian Bothwell; Patricia Farrell Donahue, Ph.D.; Latrealle Lee; Felicia Lopez; Amie Steele; Sabrina Streagle; John Van Schaik; Cheryl Weissman; Erik Wilkins-McKee; and Richard Winsor made key contributions to this report. 
Prepositioned Stocks: Additional Information and a Consistent Definition Would Make DOD’s Annual Report More Useful. GAO-15-570. Washington, D.C.: June 16, 2015.
Prepositioned Stocks: DOD’s Strategic Policy and Implementation Plan. GAO-14-659R. Washington, D.C.: June 24, 2014.
Prepositioned Stocks: Inconsistencies in DOD’s Annual Report Underscore the Need for Overarching Strategic Guidance and Joint Oversight. GAO-13-790. Washington, D.C.: September 26, 2013.
Prepositioned Materiel and Equipment: DOD Would Benefit from Developing Strategic Guidance and Improving Joint Oversight. GAO-12-916R. Washington, D.C.: September 20, 2012.
Defense Logistics: Department of Defense Has Enhanced Prepositioned Stock Management but Should Provide More Detailed Status Reports. GAO-11-852R. Washington, D.C.: September 30, 2011.
Warfighter Support: Improved Joint Oversight and Reporting on DOD’s Prepositioning Programs May Increase Efficiencies. GAO-11-647. Washington, D.C.: May 16, 2011.
Defense Logistics: Department of Defense’s Annual Report on the Status of Prepositioned Materiel and Equipment Can Be Further Enhanced to Better Inform Congress. GAO-10-172R. Washington, D.C.: November 4, 2009.
Defense Logistics: Department of Defense’s Annual Report on the Status of Prepositioned Materiel and Equipment Can Be Enhanced to Better Inform Congress. GAO-09-147R. Washington, D.C.: December 15, 2008.
Defense Logistics: Improved Oversight and Increased Coordination Needed to Ensure Viability of the Army’s Prepositioning Strategy. GAO-07-144. Washington, D.C.: February 15, 2007.
MCPP-N was established in 1981 as part of a DOD agreement to support the defense of Norway and global U.S. Marine Corps expeditionary operations. In 2012 the Marine Corps began transforming MCPP-N from an engineering and transportation capability to a Marine Air Ground Task Force capability, which includes combat vehicles and other tactical equipment, and it expects to complete the transformation in 2016. Senate Report 113-176 included a provision that GAO review MCPP-N. This report determines the extent to which (1) MCPP-N addresses U.S. European and U.S. Africa command requirements; (2) reliable cost estimates exist to fund MCPP-N's sustainment of equipment to support a Marine Air Ground Task Force capability; and (3) the Marine Corps has quality assurance procedures in place to monitor the management of MCPP-N. GAO reviewed agency guidance and plans, analyzed budget estimates, and interviewed Marine Corps, Department of State, and Norwegian Defence officials. The Marine Corps is changing its mix of equipment at Marine Corps Prepositioning Program – Norway (MCPP-N) to address the U.S. European and U.S. Africa commands' strategic and theater-specific operational requirements. U.S. European Command's posture plan identifies MCPP-N as a key program that can respond to contingencies. While U.S. Africa Command plans that refer to a need to access prepositioned equipment do not specifically identify MCPP-N as an asset to meet that need, both Marine Corps and U.S. Africa Command officials stated that MCPP-N has served and can continue to serve as a global support asset to meet combatant command requirements. The Marine Corps reported that it routinely uses MCPP-N equipment sets to support European and Africa training and exercises. 
Marine Corps cost estimates for sustaining the equipment to support a Marine Air Ground Task Force capability at MCPP-N may not be fully reliable, in that they do not fully meet the four general characteristics for reliable cost estimating—that is, being accurate, well-documented, credible, and comprehensive. For example, the Marine Corps documented its cost estimates, but the documentation did not include the source data used to develop the estimates or the calculations performed and estimating methodologies used. Marine Corps officials stated that they are drafting guidance for developing cost estimates for budget plans and plan to issue it in the fall of 2015, but this guidance will not address the four general characteristics for reliable cost estimating. Without ensuring that this guidance fully addresses those characteristics, the Marine Corps will not be positioned to know whether its budget proposals will meet the goal of sustaining equipment for a Marine Air Ground Task Force capability at MCPP-N. The Marine Corps could improve its quality assurance procedures for monitoring MCPP-N. Specifically, the service relies upon the Norwegian Equipment Information Management System for data needed to manage its equipment inventory due to limitations in its own system, such as the lack of a warehousing application to effectively manage MCPP-N equipment. The reliance on two different information systems, one of which is owned and operated by a foreign government, creates several management challenges and risks to data reliability for the Marine Corps. For example, it results in a time lag in the accuracy of information in the Marine Corps system until it is manually updated with information from the Norwegian system—a time-consuming process that introduces a vulnerability to errors. The Marine Corps and the Norwegians have taken some steps to mitigate these risks in the interim until the Marine Corps system is capable of replacing the Norwegian system. 
Additionally, relying on the Norwegian system for management information makes the Marine Corps vulnerable to any weaknesses that may exist within the Norwegian system. However, the Marine Corps has not conducted a quality assurance review of the Norwegian system. Performing such a review would constitute a key step toward mitigating potential weaknesses in the Norwegian system. GAO recommends that the Marine Corps (1) incorporate the four characteristics of reliable cost estimates in the forthcoming prepositioning programs budget development policy; and (2) develop, in consultation with the Norwegian Defence Logistics Organization, a means to conduct a quality assurance review of the Norwegian Equipment Information Management System. The Marine Corps concurred with the recommendations.
Many entities are involved in the production and distribution of television content to households, as shown in figure 1. Local television stations may acquire network content from the national broadcast networks that they are affiliated with, such as CBS; from syndicators for syndicated content, such as game shows and reruns; or from both. Stations also create their own content, including local news. Stations provide content to households directly through over-the-air transmission, which households can receive free of charge, and through retransmission by MVPDs, such as cable and satellite operators. Content producers, such as Sony and Disney, also distribute content through cable networks, such as ESPN, that are carried by MVPDs. “Over-the-top” providers, such as Netflix, provide content to consumers through Internet connections often provided by MVPDs. According to FCC, local television stations’ affiliation agreements with networks and contracts with syndicators generally grant a station the right to be the exclusive provider of that network’s or syndicator’s content in the station’s local market. Broadcasting industry stakeholders and economic theory note that exclusive territories can provide economic benefits to local television stations, broadcast networks, and viewers. Local television stations benefit from being the exclusive providers in their markets of high-demand network content, such as professional sports and primetime dramas. Being the exclusive provider supports stations’ viewership levels, which strengthens their revenues, allowing them to invest in the production of local content, among other things. For broadcast networks, exclusivity can help increase the value of each local station and create efficiencies in the distribution of network content. 
Thus, while exclusive territories reduce competition between some stations (e.g., local NBC stations in different geographic markets do not compete), the exclusive territories could provide incentives for stations to invest more heavily in the development of content and thus promote greater competition between stations in the same geographic market (e.g., local ABC and NBC stations in the same market compete), which can benefit viewers. FCC’s exclusivity rules are an administrative mechanism for local television stations to enforce their exclusive rights obtained through contracts with broadcast networks and syndicators. Network non-duplication. This rule protects a local television station’s right to be the exclusive provider of network content in its market. FCC promulgated the rule in 1966 to protect local television stations from competition from cable operators that might retransmit the signals of stations from distant markets. FCC was concerned that the ability of cable operators to import the signals of stations in distant markets into a local market was unfair to local television stations with exclusive contractual rights to air network content in their local market. The rule allows exclusivity within the area of geographic protection agreed to by the network and the station, so long as that region is within a radius of 35 miles—for large markets—or 55 miles—for small markets—from the station (see fig. 2). Syndicated exclusivity. This rule protects a local television station’s right to be the exclusive provider of syndicated content in its market. FCC first promulgated the rule in 1972 to protect local television stations and ensure the continued supply of content. This rule applies within an area of geographic protection agreed to by the syndicator and the station, so long as that region is within a 35-mile radius from the station. 
The exclusivity rules—when invoked by local television stations—require cable operators to block duplicative content carried on a distant signal imported into the station’s protected area by cable operators. For example, these rules allow WJZ, the CBS-affiliated local television station in Baltimore, to prohibit a cable operator from showing duplicative network content on another market’s CBS station that the cable operator imports into Baltimore. Similarly, the rules allow WJZ to prohibit a cable operator from showing any duplicated syndicated content on any other market’s station the cable operator imports into Baltimore. Local television stations are able to invoke the exclusivity rules regardless of whether their signals are retransmitted by a cable operator or not. For example, even if WJZ is not retransmitted by a particular cable operator in Baltimore, WJZ can invoke its exclusivity rights against that cable operator, requiring it to block duplicative content. FCC has statutory authority to administratively review complaints of violation of these rules (e.g., if a local television station believes a cable operator imported a distant signal into its market even though the station invoked its exclusivity protections) when such complaints are formally brought before the Commission. FCC officials said that the Commission addresses such complaints on a case-by-case basis. The broadcast industry is governed by a number of other rules and statutes that interplay with the exclusivity rules. These rules and laws include the following: Must carry. Must carry refers to the right of a local television station to require that cable operators that serve households in the station’s market retransmit its signal in that local market. The choice to invoke must carry is made every 3 years by stations. Cable operators carrying stations under the must-carry rule may not accept or request any fee in exchange for coverage. Retransmission consent. 
Retransmission consent refers to permission given by television stations that do not choose must carry to allow a cable or satellite operator to retransmit their signals. Stations invoke either retransmission consent or must carry. Retransmission consent was enacted in 1992; at the time, Congress determined that cable operators obtained great benefit from the broadcast signals that they were able to carry without broadcaster consent, which resulted in an effective subsidy to cable operators. Retransmission rights are negotiated directly between a local television station and cable and satellite operators. By opting for retransmission consent, stations give up the guarantee that cable and satellite operators will carry their signal under must carry in exchange for the right to negotiate compensation for their retransmission. Cable and satellite operators are unable to retransmit the signal of a local television station that has chosen retransmission consent without its permission. If, despite negotiations, a local television station and a cable or satellite operator do not reach agreement, the local television station may prohibit the cable or satellite operator from retransmitting its signal, commonly referred to as a “blackout.” FCC rules require local television stations and cable or satellite operators to negotiate for retransmission consent in “good faith.” FCC’s rules set a number of good faith standards, including a requirement that parties designate an individual with decision-making power to lead negotiations. Compulsory copyright. Must carry and retransmission consent pertain to the retransmission of a local television station’s signal. The content within that signal is protected by copyright. For example, the National Football League (NFL) holds the copyright for its games that are broadcast on CBS, Fox, and NBC. 
Generally, any potential user (other than the copyright holder) intending to transmit copyright-protected content must obtain permission from the copyright holder beforehand. The compulsory copyright licenses, enacted in 1976, allow cable operators to retransmit all content on a local television station without negotiating with the copyright holders. To make use of the compulsory copyright, the cable operator must follow relevant FCC rules and pay royalties to the Copyright Office within the Library of Congress. The Copyright Act establishes the royalties that a cable operator must pay to carry television stations’ signals. A cable operator pays a minimum royalty fee regardless of the number of local or distant television station signals it carries, and the royalties for local signals are less than those for distant signals.

Compensation for television content flows through industry participants in a number of ways that are relevant to the exclusivity rules, as seen in figure 3. Households that subscribe to television service with an MVPD pay subscription fees; FCC reported that the average monthly fee for expanded-basic service was $64.41 on January 1, 2013. Those MVPDs, including cable and satellite operators, pay retransmission consent fees to local television stations that opt for retransmission consent; as discussed above, the fees are determined in negotiations between stations and MVPDs. Advertisers purchase time from local television stations, broadcast networks, and MVPDs. Local television stations provide compensation to their affiliated national broadcast networks and to the providers of syndicated content in exchange for the rights to be the exclusive provider of that content in their market. Local television stations also use their advertising and retransmission consent revenues to develop their own content, including local news.
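The shape of the compulsory-license royalty structure described above—a minimum fee owed regardless of carriage, with distant signals costing more than local ones—can be sketched in a toy calculation. The dollar amounts below are invented purely for illustration; the actual royalties are set by the Copyright Act and depend on a cable system's gross receipts.

```python
def cable_royalty(local_signals: int, distant_signals: int) -> float:
    """Toy model of the compulsory-license royalty structure:
    a minimum fee applies regardless of the number of signals
    carried, and distant signals are charged more than local ones.
    All rates here are hypothetical, not the statutory amounts."""
    MINIMUM_FEE = 100.0    # hypothetical minimum royalty
    LOCAL_RATE = 10.0      # hypothetical per-signal rate, local
    DISTANT_RATE = 50.0    # hypothetical per-signal rate, distant
    computed = local_signals * LOCAL_RATE + distant_signals * DISTANT_RATE
    return max(MINIMUM_FEE, computed)

# An operator carrying no signals still owes the minimum fee,
# and distant signals raise the bill faster than local ones.
print(cable_royalty(0, 0))   # 100.0 (minimum fee applies)
print(cable_royalty(5, 0))   # 100.0 (5 * 10 = 50, still below the minimum)
print(cable_royalty(5, 5))   # 300.0 (5 * 10 + 5 * 50)
```

The key properties the sketch captures are the floor (the minimum fee) and the higher marginal cost of distant signals, which is why importing distant stations is more expensive under the license than carrying local ones.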
In 2014, FCC issued an FNPRM to consider eliminating or modifying the exclusivity rules, in part to determine if the rules are still needed given changes to the video marketplace since the rules were first promulgated. FCC asked for comments on, among other things, the potential effects of eliminating the rules. In response to the FNPRM, FCC received 72 records during the open comment period, including letters from individuals, and comments and reply comments from industry stakeholders. FCC officials said that the Media Bureau is working on a recommendation for the FCC Chairman’s consideration on whether to repeal or modify the exclusivity rules; there is no firm timeframe for when the bureau may make a recommendation.

All 13 broadcast industry stakeholders (local television stations, national broadcast networks, and relevant industry associations) we interviewed and whose comments to FCC we reviewed report that the exclusivity rules are needed to help protect stations’ exclusive contractual rights to air network and syndicated content in their markets. Those stakeholders reported that the rules provide an efficient enforcement mechanism to protect the exclusivity that local television stations negotiate for and obtain in agreements with networks and syndicators; in the absence of the rules, enforcement of exclusivity would have to take place in the courts, which would be difficult and inefficient for several reasons. These stakeholders report that if a local television station believes that a cable operator improperly imported duplicative content on a distant signal into its market, the station would be unable to bring legal action to stop the airing of this duplicative content. Specifically, the cable operator may have an agreement with a station in a distant market that allows it to retransmit that station’s signal in other markets.
Since the affected local station might not have a contract with either the cable operator that is importing the distant station or the distant station, these stakeholders report that the local station cannot bring legal action. In 2012, for example, cable operator Time Warner Cable (TWC) did not reach a retransmission consent agreement with Hearst broadcast stations in five markets. TWC’s contract with another broadcaster, Nexstar, did not explicitly prohibit retransmission of Nexstar’s signals into distant markets, and TWC imported Nexstar stations into Hearst’s markets. However, according to one broadcast industry stakeholder, because of the lack of a contractual relationship between Hearst and TWC regarding the retransmission of Nexstar’s signals, it would have been very difficult for Hearst to take a breach of contract action. Even if a local station could bring legal action, these broadcast industry stakeholders added that enforcing exclusivity through courts would be more time consuming and resource intensive than using FCC administrative review to determine or uphold exclusive rights that parties negotiated in contracts.

Furthermore, all 13 broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed report that the exclusivity rules are needed to help protect stations’ revenues. These stakeholders report that because the rules protect the contractual exclusivity rights of local television stations, stations can maintain their bargaining position in retransmission consent negotiations with cable operators, allowing them to obtain what they consider to be fair retransmission consent fees based on the value of the content in their signal. If a local station does not grant a cable operator retransmission consent, the cable operator cannot provide any network or syndicated content that the station provides, including high-demand content.
By contrast, if cable operators could import duplicative content on a distant signal, even on a temporary basis to avoid losing national network content during a retransmission consent impasse, these stakeholders report that the bargaining position of local television stations would decline, with a commensurate decline in retransmission consent fees and the value of the local television station, as the station would no longer be the exclusive content provider. In addition, because the rules ensure that local television stations’ audiences are not reduced by the availability of duplicative content on signals from distant markets (for example, all households in a given market who watch popular NBC prime-time dramas will do so on their local NBC affiliate, as households are unable to do so on an NBC station from another market), they report that the rules help protect their audience share. This, in turn, allows local television stations to obtain higher advertising revenues than they would if they were not the exclusive provider of network and syndicated content in their market. These broadcast industry stakeholders also reported that by strengthening local stations’ revenues, the rules help them invest in developing and providing local news, emergency alerts, and community-oriented content, in support of FCC’s localism goals.

However, the majority of cable industry stakeholders we interviewed and whose comments to FCC we reviewed reported that many local television stations have reduced their investments in local news in recent years despite the existence of the rules. In addition, we previously found that local television stations are increasingly sharing services, such as equipment and staff, for local news production. For example, stations can have arrangements wherein one station produces another station’s news content and also provides operational, administrative, and programming support.
In addition, viewership for local news has declined in recent years—according to the Pew Research Center’s analysis of 2013 Nielsen data, viewership for early evening newscasts had declined 12 percent since 2007. During this time, Americans have increasingly turned to other devices—such as computers and mobile devices—to access news on the Internet. For example, the Pew Research Center also reported in 2013 that 54 percent of Americans said they access news on mobile devices and 82 percent said they do so on a desktop or laptop computer.

Eight of 12 cable industry stakeholders we interviewed and whose comments to FCC we reviewed reported that because the rules help local television stations be the exclusive provider of network content in their market, the rules allow local television stations to demand increasingly higher retransmission consent fees from cable operators, which some said can lead to higher fees that households pay for cable television service. Because local television stations are the exclusive providers of network content in their markets (e.g., the NBC affiliate in San Diego is the only provider of popular NBC prime-time dramas in that market), cable operators report that they are forced to pay increasingly higher retransmission consent fees. They report that this occurs because if a local television station cannot reach agreement with the cable operator regarding retransmission consent and does not grant retransmission rights to the cable operator, the cable operator cannot import a signal from a distant market to provide network content and the cable operator’s subscribers lose access to network content. This puts the cable operator at risk of losing subscribers to competitors, such as other cable and satellite operators, that continue to carry the local television station and its network content.
While 5 of 12 cable industry stakeholders we interviewed and whose comments to FCC we reviewed said that they prefer to retransmit the local station instead of a distant market station, they said that the exclusivity rules limit their ability to seek alternatives if they are unable to agree to retransmission consent fees with a local station. Eight cable industry stakeholders reported that as a result, the rules have led to sharp and rapid increases in retransmission consent fees in recent years—a trend that they expect to continue—which can lead to higher cable fees for households. SNL Kagan, a media research firm, has projected that retransmission consent fees will increase from $4.9 billion in 2014 to more than $9.3 billion in 2020. However, 4 of 13 broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed stated that cable networks—such as ESPN, TBS, and AMC—also have exclusive distribution. For example, a cable operator wishing to carry ESPN can only obtain rights to do so from ESPN.

Industry stakeholders we interviewed and whose comments to FCC we reviewed discussed different scenarios under which eliminating the exclusivity rules may lead to varying effects (see fig. 4). In one scenario, eliminating the exclusivity rules may provide cable operators with opportunities to import distant signals into local markets. This could potentially reduce the bargaining position of local television stations in retransmission consent negotiations, which could reduce station revenues with varying effects on the availability of content and on households; however, the magnitude of these effects is uncertain. In two other scenarios, eliminating the exclusivity rules may have little effect, as local television stations could maintain their position as the exclusive provider of network and syndicated content. As a result, retransmission consent negotiations may be unlikely to change, likely resulting in minimal effects on content and households.
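The growth implied by the SNL Kagan projection cited above can be checked with a quick calculation: rising from $4.9 billion in 2014 to $9.3 billion in 2020 corresponds to a compound annual growth rate of roughly 11 percent.

```python
# Compound annual growth rate (CAGR) implied by the SNL Kagan
# projection: retransmission consent fees of $4.9 billion in 2014
# growing to $9.3 billion in 2020 (6 years of growth).
start, end, years = 4.9, 9.3, 2020 - 2014
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 11.3% per year
```

This is only the growth rate implied by the two endpoints of the projection; the source does not specify year-by-year figures.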
Eleven of 13 broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed said that in the absence of the exclusivity rules, some local television station contracts with cable operators may allow for retransmission of their signals to distant markets. This may happen if contracts between local television stations and cable operators do not clearly prohibit retransmission outside of the stations’ local markets, as was the case in Nexstar’s contract with TWC discussed earlier. Two of these stakeholders said this could happen with small broadcasters that might lack the financial resources for legal counsel during their negotiations with cable operators. Broadcast networks could provide such assistance. However, officials from all three broadcast networks we interviewed told us that they currently do not oversee their affiliates’ retransmission consent agreements. In comments to FCC, one cable industry association suggested that FCC prohibit network involvement in the retransmission consent negotiations of their affiliates. Depending on how FCC interprets or amends its good-faith rules, broadcast networks may be unable to take a more active role in the retransmission consent negotiations between their affiliates and cable operators.

Even if just one local television station allowed a cable operator to retransmit its signal outside its local market, the cable operator could retransmit that signal in any other market that it served; this could potentially harm the exclusivity of local television stations affiliated with the same broadcast network in those markets served by the cable operator. The potential ability of a cable operator to import a distant signal, and the potential weakening of exclusivity that could result, may lead to a series of effects on the distribution of content—including local content—and on households and the fees they pay for cable television service (see fig. 5).
The majority of both cable and broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed stated that as a result of the potential of a cable operator retransmitting a distant station’s signal into a local market, local television stations may have a reduced bargaining position during retransmission consent negotiations with cable operators. As stated earlier, the fact that local television stations are the exclusive providers in their markets of high-demand national content provides them with a strong bargaining position in negotiations with cable operators. However, if during retransmission consent negotiations a cable operator can provide certain content by retransmitting the signal of a station affiliated with the same broadcast network in another market, the local station’s bargaining position declines because it is no longer the exclusive provider of the national network content available to the cable operator in the station’s market. This reduction in bargaining position may lead to fewer blackouts and a reduction in retransmission consent fees. With the exclusivity rules in place, a local television station may be willing to pull its signal from a cable operator (that is, have a blackout) knowing that the cable operator has no alternative for providing high-demand network and syndicated content. However, without the rules, the local television station may be less willing to pull its signal from the cable operator, as the cable operator could provide the same high-demand content to its customers by importing a station from a distant market.
For example, if a cable operator in Baltimore could import the Atlanta NBC affiliate into Baltimore when it does not reach a retransmission consent agreement with the Baltimore NBC affiliate, the Baltimore affiliate stands to gain little from pulling its signal, and thus not being retransmitted, since households served by the cable operator in Baltimore could still access NBC network content on the imported Atlanta station. With fewer blackouts, consumers would be less likely to lose access to the broadcast network and syndicated content they demand. With a reduced bargaining position, local television stations may agree to retransmission consent fees that are lower than they otherwise would be, because local television stations want to avoid their signals being replaced by another television station’s signal from a distant market. This may mean that retransmission consent fees could decrease or increase at a slower rate than they would if broadcasters maintained the same bargaining position they have now. For example, the NBC affiliate in Baltimore may be willing to accept lower retransmission consent fees from a cable operator knowing that the cable operator can import NBC content from another market if the two do not reach agreement on retransmission consent. In addition, to the extent that a cable operator does import a distant signal into a given market, the local station in that market may lose some viewers who watch duplicative content on the imported station. To the extent this happens, advertisers may spend less on advertising time given the reduction in audience, and the advertising revenues of the local television station may decline. The potential reduction of local stations’ retransmission consent and advertising revenues could affect the content stations can produce and distribute to households, including local content, in multiple ways, as described below. However, the nature of these effects is unknown.
Local television stations may have fewer resources to pay in compensation to their affiliated broadcast networks. If so, the resulting reduction in revenues for national broadcast networks may reduce their ability to produce, obtain, and distribute high-cost and widely viewed content, such as national sports and primetime dramas. This potential outcome may result in the migration of some content to cable networks to the extent that cable networks outbid broadcast networks for this high-cost content (e.g., if ESPN outbids Fox for NFL coverage or more high-cost dramas are provided by the cable network AMC instead of broadcast networks). If this happens, consumers who rely on free over-the-air television and do not subscribe to cable television service may not be able to view certain content that has traditionally been available on over-the-air television unless they begin to subscribe to a cable operator’s service.

Local television stations may have fewer resources to invest in local content. Twelve of 13 broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed said this could reduce the quality or quantity of local content provided to viewing households. Nine of these stakeholders reported that local news is a major cost for local television stations.

Local television stations may have fewer resources to pay for syndicated content. If so, syndicators could be less able to produce, obtain, and distribute syndicated content, which could affect the type and quantity of syndicated content that households are able to view.

In addition to these potential changes in content, eliminating the exclusivity rules may affect the fees consumers pay for cable television service. However, because multiple factors may influence fees and the extent to which that happens is unknown, we cannot quantify the effect.
To the extent that eliminating the exclusivity rules causes retransmission consent fees paid by cable operators to be lower than they otherwise would be, cable operators may pass some of these savings along to consumers in the form of lower subscription fees. However, as we have noted, eliminating the rules could lead to a migration of some highly viewed and high-cost content to cable networks from free over-the-air local television stations. This content migration could also affect fees for cable service; cable networks that obtain such content may incur additional costs for content and thus charge cable operators more to carry their networks. Thus, cable operator cost savings on retransmission consent fees could be offset to some extent by higher cable network fees. Furthermore, migration of such content could cause some households that do not subscribe to cable services to begin doing so, or cause some households to upgrade their service to obtain additional cable networks. This increased demand for cable service could also lead to some upward pressure on cable subscription fees.

Eleven of 13 broadcast industry stakeholders we interviewed and whose comments to FCC we reviewed stated that in the absence of the exclusivity rules, the compulsory copyright license for distant signals may allow a cable operator to retransmit a local television station’s signal into another market, as the cable operator does not need to obtain approval from copyright holders. Nine of these 13 stakeholders stated that this compulsory copyright may not have been enacted if the exclusivity rules did not already exist. Six of these 13 stated that, as a result, if FCC eliminates the exclusivity rules, statutory changes would also be needed to eliminate the compulsory copyright license for distant signals.
Assuming that the content in a television station’s broadcast retains copyright protection, if copyright law were amended to remove the compulsory copyright for distant signals, a cable operator wishing to retransmit a station’s signal into a distant market would need to clear the copyrights with the copyright holders, such as the NFL, of all content included on the television station’s signal. However, we have previously found that obtaining the copyright holders’ permission for all this content would be challenging. Each television program may have multiple copyright holders, and rebroadcasting an entire day of content may require obtaining permission from hundreds of copyright holders. The transaction costs of doing so make this impractical for cable operators. Furthermore, as broadcast networks are also copyright holders for some content that their affiliated local television stations air, such as the network’s national news, they may be unwilling to grant such copyright licenses to cable operators wishing to retransmit that content on a distant signal, given networks’ interest in preserving their system of affiliate exclusivity, as discussed earlier.

In such a scenario, cable operators may be unable to import distant signals, and local television stations may not face the threat of duplicative network and syndicated content on a distant signal. Local television stations may retain the same bargaining position that they currently have during retransmission consent negotiations. As a result, there may not be any change in the likelihood of a blackout, retransmission consent fees, the quantity and quality of content, or fees for cable television service.

Nine of 12 cable industry stakeholders we interviewed and whose comments to FCC we reviewed suggested that if the exclusivity rules were eliminated, there may be minimal effects as exclusivity would continue to exist in contracts.
According to FCC, the affiliation agreements between local television stations and broadcast networks generally define exclusive territories for the affiliate stations and prohibit stations from granting retransmission consent outside their local markets. However, as we discussed earlier, only one local television station granting retransmission consent outside its local market to a cable operator could undermine the exclusivity of all the affiliates of a broadcast network in markets served by that cable operator. Broadcast industry stakeholders report that broadcast networks could take legal action against local television stations that violate the terms of the affiliation agreements by granting retransmission consent outside their local market. However, two broadcast networks we interviewed said that they are reluctant to sue their affiliates because they prefer not to take legal action against their business partners; one added that such a suit could take a long time to be resolved. Depending on FCC’s interpretation of or amendments to its good-faith rules, local television stations and broadcast networks may be able to take actions to protect against stations’ granting retransmission consent outside their local markets, thereby protecting stations’ exclusive territories. Assuming FCC’s good-faith rules permit such actions, broadcast networks may choose to take a more proactive role in their affiliates’ retransmission consent negotiations with cable operators. As we discussed earlier, networks have an incentive to maintain stations’ exclusive territories and potentially could provide input to stations’ retransmission consent negotiations to help prevent stations’ granting retransmission consent outside their local markets, if that input is allowed under FCC’s interpretation of the good-faith rules.
For example, if FCC found it permissible, networks potentially could provide suggested contract language that clearly limits retransmission by cable operators to the station’s local market. With contracts clearly protecting the exclusivity of local television stations and preventing cable operators from retransmitting signals to distant markets, cable operators may be unlikely to import distant signals, as doing so would be a clear violation of their retransmission consent contract. In this scenario, local television stations may retain their exclusivity and may not have any change to their bargaining position during retransmission consent negotiations. Therefore, stations’ retransmission consent fees and revenue, the quantity and quality of content, and cable subscription fees may not change.

FCC’s exclusivity rules are part of a broader broadcasting industry legal and regulatory framework, including must carry, retransmission consent, and compulsory copyrights. The exclusivity rules predate many of these laws and rules, and in some instances, the development of these other laws was premised on the existence of the exclusivity rules. The effects of eliminating the exclusivity rules are uncertain, because the outcome depends on whether related laws and rules are changed and how industry participants respond. For example, if the compulsory copyright license for distant signals were eliminated, as some broadcast industry stakeholders suggest, removing the exclusivity rules may have little effect.
In contrast, if FCC were to interpret good faith in its rules to limit the extent to which broadcast networks can influence retransmission consent negotiations between their affiliated stations and cable operators, as one cable industry association suggests, removing the exclusivity rules could lead to a series of events, the outcome of which could be a reduction in the quality or quantity of local content and potential changes in the fees households pay for cable television service.

We provided a draft of this report to FCC for review and comment. FCC provided technical comments via email that we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Chairman of the FCC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.

The objectives of this report were to examine (1) industry stakeholder views on the need for and effects of the exclusivity rules and (2) the potential effects that removing the exclusivity rules may have on the production and distribution of content, including local news and community-oriented content. To address both objectives, we reviewed all public comments filed by industry stakeholders with the Federal Communications Commission (FCC) as part of its further notice of proposed rulemaking (FNPRM)—FCC docket 10-71—considering elimination or modification of the network non-duplication and syndicated exclusivity rules (exclusivity rules).
We did not review comments filed by individuals and only reviewed those from industry stakeholders, such as local television stations or companies, multichannel video programming distributors (MVPD), including cable and satellite operators, national broadcast networks, industry associations representing such companies, and content copyright holders. In total, we reviewed 31 public comments. Of those 31 comments, 14 were from broadcasting industry stakeholders, 13 were from cable industry stakeholders, 1 was from a satellite industry stakeholder, 1 stakeholder was both a broadcaster and a cable operator, 1 was from a content provider, and 1 was from a related industry association. We reviewed these public comments for stakeholder views on the rules, the current effects of the rules, and the potential effects of eliminating the rules. In addition, we reviewed relevant rules and statutes, such as FCC’s exclusivity rules and relevant rulemaking documents, such as FCC’s FNPRM. We also reviewed affiliation agreements between broadcast networks and local television stations relevant to recent legal action regarding the exclusivity rules. We did not review retransmission consent agreements between local television stations and cable operators, however, as these agreements are not publicly available. We also conducted a literature review for studies related to FCC’s exclusivity rules, including any studies focused on the potential effects of eliminating the rules. To identify existing studies from peer-reviewed journals, we conducted searches of various databases, such as EconLit and ProQuest. We searched these and other databases using search terms including “exclusivity,” “network non-duplication,” and “syndicated exclusivity” and looked for publications in the past 5 years. We reviewed studies that resulted from our search and found that none of them were directly relevant to our work. 
We reviewed prior GAO reports that cover relevant issues, such as retransmission consent and copyrights. We also conducted semi-structured interviews with the industry stakeholders that filed public comments with FCC as part of its FNPRM considering eliminating or modifying the exclusivity rules. In some cases, multiple stakeholders co-signed and co-filed public comments; in these instances, we interviewed at least one of those stakeholders. While we attempted to interview at least one stakeholder for each of the 31 formal comments filed, four stakeholders did not respond to our requests for interviews. We interviewed 1 content provider, 13 broadcast industry stakeholders, 12 cable industry stakeholders, and 1 satellite industry stakeholder. During these interviews, we asked stakeholders about their views of FCC’s exclusivity rules, the effects of the rules, and the effects of potentially eliminating the rules on retransmission consent fees, broadcaster revenues, and the distribution of content, including locally oriented content, among other things. In addition, we interviewed selected industry analysts who study the broadcasting and cable industries regarding the rules and the potential effects of eliminating the rules. We selected analysts to interview by identifying ones who analyze and make recommendations on the stocks of publicly traded companies that we interviewed as part of our review and whom we had interviewed as part of prior engagements. We also interviewed FCC officials regarding these rules and FCC’s rulemaking process. For our second objective, in addition to gathering information about industry stakeholder views on the potential effects of eliminating the exclusivity rules, we also analyzed those views in light of general economic principles to understand more fully the potential effects of eliminating the exclusivity rules.

Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov.
In addition to the contact above, Michael Clements, Assistant Director; Amy Abramowitz; Mya Dinh; Gerald Leverich; Josh Ormond; Amy Rosewarne; Matthew Rosenberg; and Elizabeth Wood made key contributions to this report.
Local television stations negotiate with content providers—including national broadcast networks, such as ABC—for the right to be the exclusive provider of content in their markets. FCC's network non-duplication and syndicated exclusivity rules (“exclusivity rules”) help protect these contractual rights. In 2014, FCC issued a further notice of proposed rulemaking (FNPRM) to consider eliminating or modifying the rules, in part to determine if the rules are still needed given changes in recent years to the video marketplace.

GAO was asked to review the exclusivity rules and the potential effects of eliminating them. This report examines (1) industry stakeholder views on the need for and effects of the exclusivity rules and (2) the potential effects that removing the exclusivity rules may have on the production and distribution of content, including local news and community-oriented content. GAO reviewed all 31 comments filed by industry stakeholders with FCC in response to its FNPRM. GAO also interviewed 27 of those industry stakeholders and FCC officials. GAO also analyzed—in light of general economic principles—stakeholder views on the potential effects of eliminating the rules. FCC reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate.

Broadcast industry stakeholders that GAO interviewed (including national broadcast networks, such as ABC, and local television stations) report that the exclusivity rules are needed to protect local television stations' contractual rights to be the exclusive providers of network content, such as primetime dramas, and syndicated content, such as game shows, in their markets. These stakeholders report that by protecting exclusivity, the rules support station revenues, including fees from cable operators paid in return for retransmitting (or providing) the stations to their subscribers (known as retransmission consent fees).
Conversely, cable industry stakeholders report that the rules limit options for providing high-demand content, such as professional sports, to their subscribers by requiring them to do so by retransmitting the local stations in the markets they serve. As a result, these stakeholders report that the rules may lead to higher retransmission consent fees, which may increase the fees households pay for cable service. Based on GAO's analysis of industry stakeholder views, expressed in comments to the Federal Communications Commission (FCC) and interviews, eliminating the exclusivity rules may have varying effects. If the rules were eliminated and cable operators could provide television stations from other markets to their subscribers (or “import” a “distant station”), local stations may no longer be the exclusive providers of network and syndicated content in their markets. This situation could reduce stations' bargaining position when negotiating with cable operators for retransmission consent. As a result, stations may agree to lower retransmission consent fees. This potential reduction in revenues could reduce stations' investments in content, including local news and community-oriented content; the fees households pay for cable television service may also be affected. Because multiple factors may influence investment in content and fees, GAO cannot quantify these effects. If the rules were eliminated, other federal and industry actions could limit cable operators' ability to import distant stations. For example, if copyright law were amended in certain ways, cable operators could face challenges importing distant stations.
A cable operator could be required to secure approval from all copyright holders (such as the National Football League) whose content appears on a distant station the cable operator wants to import; with possibly hundreds of copyright holders in a day's programming, the transaction costs would make it unlikely that a cable operator would import a distant station. Also, broadcast networks may be able to provide oversight of retransmission consent agreements if FCC rules were to allow it. Cable operators may only import distant stations if retransmission consent agreements with those stations permit it, and stations' agreements with broadcast networks generally prohibit stations from granting such retransmission. If FCC rules allowed it, broadcast networks could provide oversight to help ensure such agreements do not grant retransmission outside the stations' local markets. Under these two scenarios, local stations may remain the exclusive providers of content in their markets, their bargaining position may remain unchanged, and there may be limited effects on content and fees for cable service.
HUD is the principal government agency responsible for programs dealing with housing, community development, and fair housing opportunities. HUD’s missions include making housing affordable by providing mortgage insurance for multifamily housing, providing rental assistance for about 4.5 million lower-income residents, helping revitalize over 4,000 localities through community development programs, and encouraging homeownership by providing mortgage insurance. HUD is one of the nation’s largest financial institutions, responsible, as of September 30, 1997, for managing more than $454 billion in mortgage insurance and $531 billion in guarantees of mortgage-backed securities. The agency’s budget authority for fiscal year 1998 is about $24 billion. HUD has initiated a number of reform and downsizing efforts in the 1990s. In February 1993, then-Secretary Cisneros initiated a “reinvention” process in which task forces were established to review and refocus HUD’s mission and identify improvements in the delivery of program services. HUD also took measures in response to the National Performance Review’s September 1993 report, which recommended that HUD eliminate its regional offices, realign and consolidate its field office structure, and reduce its field workforce by 1,500 by the close of fiscal year 1999. Following a July 1994 report by the National Academy of Public Administration that criticized HUD’s performance and capabilities, Secretary Cisneros issued a reinvention proposal in December 1994 that called for major reforms, including a consolidation and streamlining of HUD’s programs coupled with a reduction in staff to about 7,500 by the year 2000. Building upon the earlier reinvention efforts, Secretary Cuomo initiated the 2020 planning process in early 1997 to address, among other things, HUD’s downsizing goals and management deficiencies.
The Congress enacted the Government Performance and Results Act of 1993 in conjunction with the Chief Financial Officers Act and information technology reform legislation to help instill performance-based management in the federal government. The Results Act seeks to shift the focus of government decisionmaking and accountability away from a preoccupation with the activities—such as grants and inspections made—to a focus on the results—such as the real gains in employability, safety, responsiveness, or program quality. Under the act, agencies are to develop strategic plans, annual performance plans, and annual performance reports. The HUD scandals of the late 1980s served to focus a great deal of public attention on the management problems at HUD. We designated HUD as a high-risk area because of four long-standing, departmentwide management problems. First, internal control weaknesses, such as a lack of necessary data and management processes, were a major factor leading to the scandals. Second, poorly integrated, ineffective, and generally unreliable information and financial management systems did not meet program managers’ needs and weakened their ability to provide management control over housing and community development programs. Third, HUD had organizational problems, such as overlapping and ill-defined responsibilities and authorities between HUD headquarters and field organizations and a fundamental lack of management accountability and responsibility. Finally, an insufficient mix of staff with the proper skills hampered the effective monitoring and oversight of HUD’s programs and the timely updating of procedures. We have testified before this Subcommittee on specific major management challenges facing HUD that are illustrative of these four deficiencies discussed above. In February 1997, we reported that HUD had made some progress in addressing these problems. 
Specifically, we reported that HUD:
had made limited progress in addressing internal control weaknesses by implementing a new management planning and control program intended to identify and rank the major risks in each program and devise strategies to abate those risks, and had reduced its material weaknesses identified under the FMFIA assessment from 51 in the early 1990s to 9. At the same time, we noted that the remaining material weaknesses were long-standing and involved large sums of money, and that financial audits had continued to identify material internal control weaknesses in HUD’s programs. We also found that managers were not actively assessing risks in their programs as required under the management control program. Finally, despite its importance as a management tool, HUD’s monitoring of program participants continued to be a problem area.
continued to make progress in improving its information and financial management systems, but much work remained: some of the projects would not be completed until the year 2000. In addition, we noted that HUD reported that most of its systems did not comply with the FMFIA and therefore could not be relied upon to provide timely, accurate, and reliable financial information and reports to management.
had completed a field reorganization that eliminated its regional office structure and transferred direct authority for staff and resources to the Assistant Secretaries, and was planning additional reorganization efforts. Although HUD had not evaluated the effects of its reorganization, most field directors we surveyed rated it successful overall and believed that the reorganization had achieved most of the intended goals—namely, eliminating previously confused lines of authority within programs, enhancing communications, reducing levels of review and approval, and improving customer service.
had made some progress in addressing the problems with staff members’ skills and with resource management.
The Department had increased staff training since our 1995 report and begun to implement a needs assessment process to plan future training. We noted that HUD directors we surveyed generally believed that the skills of their staff had improved over the previous 2 years; however, 40 percent of the directors rated the Department’s training as less than good. In addition, we and HUD’s Inspector General continued to identify staff resource problems in HUD’s major program areas, specifically in public housing and the Federal Housing Administration (FHA). Finally, we reported that the problem of inadequate staff resources to monitor and administer HUD’s current array of programs likely would be compounded as the Department implemented plans to downsize. Our February 1997 report concluded that HUD programs continued to pose a high risk to the government in terms of their vulnerability to waste, fraud, abuse, and mismanagement; that HUD needed to complete its corrective actions; and that HUD and the Congress needed to work together to implement a restructuring strategy that focuses HUD’s mission and consolidates, reengineers, or reduces HUD’s programs to bring its responsibilities in line with its management capacity. In its March 1998 report on the audit of the agency’s fiscal year 1997 consolidated financial statements, HUD’s Inspector General reported that management problems continue. For example, the report identified seven material internal control weaknesses, including the agency’s failure to establish a control structure that provided reasonable assurance that $18 billion in rental subsidies are based upon tenants’ correct incomes. 
Other material weaknesses included the need for HUD to upgrade its financial management systems; for FHA to improve its accounting and financial management systems; for HUD to improve the management of its resources, which affects the Department’s ability to monitor program recipients and contractors; and for HUD to improve its monitoring of multifamily projects. Our work has also shown a continuing need for improvement. For example: Our recent report on HUD’s tenant-based Section 8 assistance program illustrates the need for further improvement in financial management. We found that flaws in HUD’s budget process, including double-counting of administrative fees that are paid to housing agencies for operating the Section 8 program and insufficient use of supporting historical data, led to significant overestimates of contract renewal needs. Recognizing these inaccuracies, HUD submitted a revised budget estimate that was $1 billion lower than its original estimate. The agency agreed with our recommendations for improvements in this area. Similarly, in our ongoing review of the project-based Section 8 program, we found errors in the analyses the Department uses to support its requests for funding to amend Section 8 contracts that do not have sufficient funding. As we discussed in recent testimony on HUD’s fiscal year 1999 budget request, these errors contributed to HUD substantially overestimating the funding needed to amend Section 8 project-based contracts in fiscal year 1999. The errors included omitting relevant Section 8 funding and contracts. We are continuing to work with HUD to ensure that these errors are corrected. We recently testified that HUD faces Year 2000 risks with its automated systems (the possibility that systems that represent the year using two digits rather than four will generate incorrect results beyond 1999).
System failures could interrupt the processing of applications for mortgage insurance, the payment of mortgage insurance claims, and the payment of rental assistance. According to HUD’s schedule, for the 30 mission-critical systems that are undergoing renovation, testing, and certification or for which renovation has not yet begun, all of these actions will be completed by December 31 of this year. However, at the time of our testimony HUD was behind schedule on 20 of these 30 mission-critical systems, with 13 of the 20 experiencing delays of 2 months or more. Furthermore, HUD reported that 5 of these 13 have “failure dates”—the first date that a system will fail to recognize and process dates correctly—between August 1, 1998 and January 1, 1999. To better ensure the completion of work on mission-critical systems, HUD officials decided to halt routine maintenance on five of its largest systems, beginning April 1. Under the HUD 2020 Management Reform Plan and related efforts, HUD is in the process of making significant changes that will affect most aspects of its operations, including the long-standing management problems and issues facing the agency. The plan calls for reducing the number of programs, reducing staffing levels, retraining the majority of the staff, reorganizing the 81 field offices, consolidating processes and functions within and across program areas into specialized centers, and modernizing and integrating the financial and management information systems. As we stated in our March 1998 report, the plan is directed in part towards correcting the management deficiencies that we and others have identified. However, because the reforms are not yet complete and some of the plan’s approaches are untested, the extent to which they will result in the intended benefits is unknown.
The following sections discuss how HUD’s reform efforts address weaknesses we have identified with the Department’s internal controls, financial and information management systems, organizational structure, and staffing. A strong internal control system provides the framework for the accomplishment of management objectives, accurate financial reporting, and compliance with laws and regulations. Effective internal controls serve as checks and balances against undesired actions such as fraud, thereby providing reasonable assurance that resources are effectively managed and accounted for. HUD’s 2020 Management Reform Plan calls for a number of actions that, if effectively implemented, could help to address internal control weaknesses, including the need for more monitoring. These actions include (1) implementing a new financial integrity program, under which program managers will be held accountable for financial management; (2) establishing a risk management office within the Office of the Chief Financial Officer to integrate risk management into day-to-day operations in program offices; (3) improving financial management systems; (4) establishing a real estate management assessment center to perform physical and financial assessments of the multifamily inventory and public housing authorities; and (5) establishing a consolidated enforcement center responsible for investigating and taking enforcement actions against organizations administering HUD funds, such as public housing authorities, communities, and multifamily project owners who do not comply with the programs they administer. In reporting on HUD’s consolidated financial statements for fiscal year 1997, the Inspector General stated that to improve its internal control environment HUD needed to be successful in completing efforts to upgrade its financial management systems, correct resource management shortcomings, address weaknesses with its management control program, and improve program performance measures.
The Inspector General also stated that the management integrity program—implemented under the HUD 2020 reform effort—was soundly conceived but that it was too early to evaluate how effective the program would be; HUD’s Office of Risk Management did not become operational until the second quarter of fiscal year 1998. The report also noted that HUD’s success in addressing the longstanding monitoring deficiency is dependent upon a concept for standardizing inspections of multifamily projects and public housing authorities that had not been tested. In April, HUD officials told us they were in the process of testing the physical assessment procedures and expected to test the financial assessment procedures within 6 months. Finally, although the HUD 2020 Management Reform Plan did not specifically address the internal control weakness relating to verifying tenants’ incomes under HUD’s rental assistance programs, the agency has begun implementing some actions under the reform effort, according to the Inspector General’s report. HUD relies extensively on information and financial management systems to manage its programs. The 2020 plan calls for HUD to modernize and integrate outdated financial management information systems with an efficient state-of-the-art system, incorporating such features as efficient data entry, support for budget formulation and execution, updates on the status of funds, standardized data for quality control, and security control. The plan also states that information and accounting systems that do not comply with FMFIA would be overhauled to correct deficiencies, their functions would be consolidated into the new accounting systems, or they would be eliminated. HUD’s project to modernize and integrate its financial management systems has been ongoing for 6 years, and was revised to support the 2020 plan.
The revised project plan calls for the consolidation of four general ledger systems into a core accounting system; an executive information system; and Communities 2020, mapping software that will show the impact of HUD’s funding activities in local communities. Recently, the Department decided to forgo purchasing a new software package to integrate its financial systems; instead, it will continue to implement the Federal Financial Systems software, which it began using in 1995. The Department plans to complete the systems integration project by September 1999 and has separated it into two phases. In the first phase, HUD will implement the Federal Financial Systems software as its consolidated general ledger and the FHA’s general ledger by September 30, 1998. In the second phase, by September 30, 1999, HUD will fully implement the software as its core accounting system and integrate it with program information systems that contain financial data. In addition, in February 1998, HUD completed a departmentwide effort to evaluate whether its systems conform to FMFIA requirements and OMB circular A-127, and it reported that 38 of its 92 systems were nonconforming systems (HUD had previously reported that 85 were not in compliance). The Inspector General’s March 1998 report pointed out, however, that 21 of the 31 systems that HUD reclassified as complying did not have detailed assessments and justifications available as required by HUD’s Chief Financial Officer. The 2020 Management Reform Plan calls for reorganizing field resources by functions, rather than program “cylinders,” and consolidating or centralizing functions. For example: HUD is consolidating single-family housing insurance operations—previously carried out in 81 field offices—in four homeownership centers, and is consolidating certain multifamily housing development and management functions—previously located in more than 50 field offices—into 18 hub offices.
The Office of Public Housing is consolidating some of its functions—previously performed in 52 public housing offices—into 27 hub offices and 16 program centers; centralizing the management of competitive grants and public housing operating and capital funds into one Grants Center; centralizing applications for demolition/disposition, designated housing plans, and homeownership plans into one Special Applications Center; and centralizing activities to improve the performance of troubled public housing authorities into two Troubled Agency Recovery Centers. The Office of Fair Housing and Equal Opportunity is consolidating program compliance monitoring and enforcement functions within its existing field structure of 48 offices into 10 hubs, 9 project centers, and 23 program offices. In addition, HUD is establishing three nationwide centers to consolidate across programs payments for rental assistance, physical and financial assessments of real estate, and enforcement functions. The budget and chief financial officer’s functions are being consolidated, and accounting operations are being consolidated from 10 divisions into one center. HUD expects to improve both the efficiency and effectiveness of its operations through these organizational changes. Specific expected benefits include (1) reducing the time for endorsements for single-family housing insurance and development applications for multifamily housing; (2) reducing paperwork requirements for grant programs; (3) greater financial management accountability, since budgetary and financial responsibilities are centralized; (4) improving HUD’s ability to manage public and assisted housing portfolios through the operations of the assessment center; and (5) improving HUD’s ability to enforce contractual requirements with private owners, public housing authorities, and other HUD clients.
As we noted in our March 1998 report, HUD’s anticipated benefits from these organizational changes are generally not based upon detailed empirical analyses or studies but rather on a variety of factors, including some workload data, limited results of one pilot project, identified best practices in HUD field offices, benchmarks from other organizations, and managers’ and staffs’ experience and judgment. We concluded that because the reforms are not yet complete and some of the approaches are untested, the extent to which they will result in the intended benefits is unknown. We believe it is too early to judge the effectiveness of HUD’s organizational changes. It will be some time before the proposed reforms are completely implemented, any operational problems reveal themselves, and corrections are made. However, we note that the Inspector General’s December 1997 report raised concerns about organizational structure, similar to those highlighted in our high-risk report, relating to the Office of Public and Indian Housing reorganization. The Inspector General stated that the structure and operating plans for overseeing programs and housing authorities may be difficult to implement because they provide for assigning staff authority and responsibilities in a fragmented and overlapping manner. Assurance that HUD has the right number of staff with the proper skills has been an issue of concern to us, the Inspector General, and others for a number of years. The HUD 2020 Management Reform Plan—in addition to its basic goal of reducing staffing to 7,500—has several proposals that affect staff resource capacity. For example, the plan calls for refocusing and retraining HUD’s workforce, consolidating and/or eliminating more than 300 programs into 70, deregulating well-operating public housing authorities, and replacing the current field structure with one that consolidates functions within and across program areas.
The plan also calls for implementing a resource estimation process that, according to HUD, will be a disciplined and analytical approach to identify, justify, and integrate resource requirements and budget allocations. In commenting on a draft of our March 1998 report, HUD’s Acting Deputy Secretary stated that the Department plans to achieve its downsizing goal of 7,500 full-time employees by 2002 in two phases. During the first phase, HUD has reduced staff to approximately 9,000 employees. According to the Acting Deputy Secretary, HUD now plans to continue downsizing to 7,500 by 2002—the second phase—only if (1) the Congress enacts legislation to consolidate HUD’s program structure and (2) there has been a substantial reduction in the number of troubled multifamily assisted properties and troubled public housing authorities. Several interrelated issues are particularly important for achieving the intended benefits of HUD’s management reform efforts: (1) HUD’s ability to meet planned timetables for implementing key reforms, (2) the adequacy of staffing during and after the transition to the “new HUD,” (3) the Department’s ability to reduce the numbers of troubled public housing authorities and troubled multifamily projects, and (4) HUD’s ability to effectively improve its procurement and contracting practices, including its oversight of contractors. It will also be important for HUD, as it implements the reforms, to assess the extent to which the reforms are achieving the desired outcomes, which will depend on both its capacity to carry out the reforms and their sustainability under changing leadership. Because of the sheer scope of HUD’s management reform efforts, the Department has a large number of actions underway simultaneously—at a time when it has just downsized by nearly 10 percent. The Department plans to have its reorganization completed by September 30, 1998, including the establishment of the new consolidated functional centers.
These changes, in turn, require other efforts, such as developing operating procedures and selecting and training staff, that must be completed in order to implement the planned reforms. One area in which it may be difficult for HUD to meet targeted timeframes relates to the “mark-to-market” change as described in the 2020 plan. Specifically, the 2020 plan described HUD’s intention to reduce excessive rent subsidies to market levels for assisted housing, noting that roughly 65 percent of HUD’s insured Section 8 multifamily portfolio (the portfolio of multifamily properties with both project-based rent subsidies and HUD-insured mortgages) have rents that are substantially above market levels. On October 27, 1997, the Congress enacted legislation to, among other things, reduce the long-term costs of project-based rental assistance and encourage project owners to restructure their FHA-insured mortgages and project-based assistance contracts before their contracts expire. HUD officials responsible for mark-to-market operations are currently taking steps to begin implementing the mark-to-market program by the mandated date of October 27, 1998. These steps include developing a management infrastructure, drafting interim and final regulations for the program, pursuing an Internal Revenue Service ruling on debt restructuring, and beginning the solicitation process for the third parties who will be responsible for actually restructuring the HUD-insured mortgages and rental assistance. However, according to HUD mark-to-market officials, HUD lacks the in-house capability to complete some other tasks that are essential to effectively implementing the mark-to-market program. These tasks include providing HUD staff and third-party partners with operating manuals, developing an organizational structure, assessing and revising information systems, providing briefings for HUD staff and third-party partners regarding operating procedures, and developing budget analyses.
HUD intends to obtain the capacity to complete such tasks through a task order under an existing management studies contract; however, the award of the task order has been delayed. Another area in which HUD may have difficulty achieving its intended schedule for implementing changes is in developing alternatives to its property disposition process. To address poorly controlled and monitored disposition of single-family properties, HUD plans to privatize or contract out most property disposition activities. Specifically, according to officials in HUD’s Single-Family Housing Division, the Department plans to sell the rights to properties before they enter HUD’s inventory, thus enabling quick disposition once the properties become available. However, many of the details of these sales, which HUD refers to as “privatization sales,” remain to be developed. In addition, HUD has proposed legislation to allow the Department to take back notes when claims are paid, rather than requiring lenders to foreclose and convey properties. HUD would then transfer the notes to a third party for servicing and/or disposition. Since HUD has not fully developed plans for these alternative methods of property disposition, its schedule for implementing changes may be delayed. For example, according to single-family property disposition officials, HUD expected to publish a proposed rule amending the current property disposition regulations in about March 1998, have a financial adviser hired by April 1998, conduct the first privatization sale in the summer of 1998, and publish the final rule amending the current regulations by September 1998. However, as of April 27, 1998, these steps had not been completed and HUD was unable to estimate when the events might occur. Our work, and that of the Inspector General, has identified problems with the adequacy of staff training and with the means of determining staffing resource needs. 
The 2020 Management Reform Plan, which incorporates the continued downsizing at HUD and the assignment of many staff to new duties, heightens the importance of both of these issues. Our February 1997 report noted that HUD had taken steps to increase the effectiveness of its staff training by, among other things, beginning to implement a needs assessment process for future training, forming partnerships with colleges and universities to create new educational opportunities, and substantially increasing expenditures for training. HUD’s field program directors that we surveyed for the report indicated that these efforts may have produced positive effects—for example, about 85 percent of the directors said that the skills of their staff had improved at least somewhat during the preceding 2 years—but that pockets of problems remained. More than one-fourth of the directors at that time were not satisfied with their staffs’ knowledge of new regulations or with their staffs’ interpersonal skills; 42 percent were not satisfied with their staffs’ knowledge of information systems. The need for staff training may be even more critical with the advent of HUD 2020, because over 1,000 employees have left the Department and HUD has reassigned some 1,300 employees. (Many more employees may be reassigned through the merit staffing of about 700 positions, which HUD initiated in April and expects to complete by June.) Cumulatively, this represents a significant loss of staff expertise. HUD’s Inspector General reported in December 1997 that “many of HUD’s technical staff experts and mid- and senior-level managers have already left the Department, taking with them vast institutional knowledge and program expertise that cannot be easily replaced.” To cope with the reforms and the attendant personnel and operational changes, HUD has laid out an ambitious training program.
For example, HUD plans to begin comprehensive training for all personnel assigned to the Section 8 Financial Management Center beginning June 1, 1998, and to train HUD staff on new tools and technology for physical inspections of properties (26 HUD inspectors have already been trained). HUD has also developed a training agenda and tentative schedule for staff of the new Enforcement Center and for Housing’s quality assurance staff; this training is to include input from the Inspector General’s office. In addition, according to the 2020 plan, HUD intends to create training programs for its new community resource representatives and public trust officers, including specialized training at universities beginning in the fall of 1998. Much attention has been focused on the origin and rationale for the downsizing targets in the 2020 management reform plan. We believe that there are two issues here that deserve the Congress’s and HUD’s attention. The first is ensuring that HUD has an adequate number of staff to carry out vital functions during the transition to its new organizational structure. The second is developing a systematic means of determining staff resource needs that can accommodate future organizational changes. HUD’s 2020 Management Reform Plan, when announced in June 1997, projected a target staffing level of 7,500 (on a full-time equivalent basis) by the year 2000, subsequently extended to 2002. However, more recently HUD has indicated that it may need more staff. A staff summary provided at HUD’s briefing for us on April 17 shows an authorized staffing level of 7,826 under the reform plan. (A report by Booz-Allen & Hamilton, Inc., noted that HUD’s projected staffing levels increased from 7,500 to 7,826 due to field management, the Enforcement Center, and the Assessment Center.)
According to HUD’s acting Deputy Secretary, this staffing level is likely to be needed even if the Congress enacts legislation consolidating programs; and it does not include any new responsibilities that may be imposed on HUD. Our March report on the 2020 Management Reform Plan found that HUD’s target staffing levels were not based upon a systematic analysis of needs. While HUD used historical workload data to apportion or allocate predetermined target numbers of staff among different locations or functions, it did not use a systematic analysis directed at determining how many staff are needed to carry out a given responsibility or function. Our finding is consistent with that of HUD’s Inspector General, who reported that the Department’s target of 7,500 staff was adopted without first performing a detailed analysis of HUD’s mission and projected workload. In its annual performance plan for fiscal year 1999, HUD noted that departmental systems for measuring work and reporting time are no longer available and that it lacks a single, integrated system to support resource allocation. The 2020 Management Reform Plan calls for HUD to implement a proposed resource estimation and allocation process. HUD intends to work with the National Academy of Public Administration to develop a methodology or approach for resource management that will allow the Department to identify and justify its resource requirements. According to the Academy, the resource estimation elements will include workload factors and analysis based on quantifiable estimates of work requirements for planning, developing, and operating current and proposed programs, priority initiatives, and functions. The methodology is also to enable HUD to estimate resources for its budget formulation and execution and to link resources to performance measures. 
While HUD plans to have certain of its structural reforms—such as the new Troubled Agency Recovery Centers and its Enforcement Center—in place by early fall of this year, reducing the numbers of troubled public housing authorities and multifamily projects partly depends on successfully implementing other elements of the 2020 reform plan, including some that require legislation. For example, the 2020 plan includes a legislative proposal to reform bankruptcy laws to prevent owners from using them as a refuge from enforcement actions. HUD is currently responsible for overseeing about 54 troubled public housing authorities. The 2020 reform plan proposes to revise the existing program for assessing the management of public housing, as well as incorporate information from physical inspections, audits, and evaluations of community and residents’ satisfaction, to provide a comprehensive annual assessment. The Inspector General reported in December that, according to HUD officials, this effort could increase the number of housing authorities defined as troubled to more than 500, depending on the scoring system used. Furthermore, the 2020 plan calls for mandating a judicial receivership for any large housing authority that remains on the troubled list for more than 1 year. According to the plan, this action would require legislation. The Inspector General noted that HUD might have to deal with a large number of receivership actions with a downsized staff. Reducing the number of troubled multifamily properties—which HUD estimates to number about 5,400—could also prove difficult. HUD’s revised processes for identifying and dealing with troubled properties are not yet fully developed, and the respective roles that multifamily project managers—located in field offices—and the enforcement center will play in taking actions on troubled multifamily projects are not yet clear. 
The 2020 plan noted that HUD lacked an efficient system to identify, assess, and respond to troubled properties, and stated that the Department-wide enforcement authority would handle troubled properties. However, according to information provided to us in the April briefing, HUD has created quality assurance divisions in its multifamily field structure and created senior project manager positions to handle severely troubled projects. It is also developing an automated tool to provide information on all conditions and activities of multifamily projects; the plans include access to the Assessment Center’s physical inspection and financial data. While the Department has reduced staff workload by transferring some responsibilities, according to HUD officials, it has not yet achieved the workload ratios (nontroubled projects per asset manager) anticipated by the 2020 plan. Part of the rationale for this workload realignment is to prevent additional projects from becoming troubled. If the physical and financial assessments of properties indicate that more properties are troubled than currently estimated, HUD may need more staff and/or time to reduce the number of troubled properties. HUD awards millions of dollars in contracts each year. The 2020 Management Reform Plan calls for HUD to contract with private firms for a number of functions, including physical building inspections of public housing and multifamily insured projects; legal, investigative, audit, and engineering services for the Enforcement Center; and activities to clean up the backlog of troubled assisted multifamily properties. The plan also encompasses the potential use of contractors to help dispose of single-family properties and to manage construction in the HOPE VI program. The Department—with fewer staff—will be responsible for ensuring that agency needs are accurately reflected in contract specifications and that contracts are fairly awarded and properly administered. 
Inadequate contracting practices leave HUD vulnerable to waste and abuse. We and the Inspector General have identified weaknesses in HUD’s procurement systems, needs assessment and planning functions, and oversight of contractor performance. For example: HUD’s ability to manage contracts has been limited because its procurement systems did not always contain accurate critical information regarding contract awards and modifications and their associated costs. Although HUD recently combined several of its procurement systems, the new system is not integrated with HUD’s financial systems, limiting the data available to manage the Department’s contracts. Inadequate oversight of contractor performance has resulted in HUD’s paying millions of dollars for services without determining the adequacy of the services provided. HUD staff have often not been trained or evaluated on their ability to manage the contracts for which they have oversight responsibility and have not always maintained adequate documentation of their reviews of contractors. This situation limits assurance that adequate monitoring has occurred. For example, we recently reported that HUD did not have an adequate system in place to assess its field offices’ oversight of real estate asset management contractors, who are responsible for safeguarding foreclosed FHA properties. The three HUD field offices we visited varied greatly in their efforts to monitor the performance of these real estate asset management contractors, and none of the offices adequately performed all of the functions needed to ensure that the contractors meet their contractual obligations to maintain and protect HUD-owned properties. HUD has recognized the need to improve its procurement processes and has begun taking actions to address weaknesses that we and the Inspector General have identified. The 2020 plan includes an effort to redesign the contract procurement process. 
HUD has recently appointed a chief procurement officer who will be responsible for improving HUD procurement planning and policies, reviewing and approving all contracts over $5 million, and implementing recommendations that may result from an ongoing study of HUD’s procurement practices by the National Academy of Public Administration. HUD is also establishing a contract review board, composed of the chief procurement officer and other senior HUD officials, that will be responsible for reviewing and approving each HUD program office’s strategic procurement plan and reviewing the offices’ progress in implementing the plans. In addition, HUD is taking actions to strengthen its monitoring of contractor activities by establishing standard training requirements for the HUD staff responsible for monitoring contractors’ progress and performance and by including standards relating to contractor monitoring in its system for evaluating employee performance. HUD is also planning actions to integrate its procurement and financial systems. We view these actions as positive steps. However, some key issues concerning their implementation remain to be decided, such as the relationship between the chief procurement officer and HUD’s Office of Procurement and Contracts, the precise role of the contract review board in overseeing HUD’s procurement actions, and HUD’s ability to have the necessary resources in place to carry out its procurement responsibilities effectively. Perhaps even more important is the extent to which these actions will lead to a change in HUD’s culture, so that acquisition planning and effective contractor oversight will be viewed by both management and staff as being intrinsic to HUD’s ability to carry out its mission successfully. The HUD 2020 Management Reform Plan appears to be the driving force behind agency operations. 
HUD has clearly linked its management reform efforts to the agency’s Results Act plans, so that its success in meeting annual performance goals and achieving strategic objectives depends on the success of the management reform efforts. In addition, HUD’s legislative proposals for 1997 support both its management reforms and strategic objectives. In its September 30, 1997, strategic plan, HUD stated that the plan builds on the foundation of the sweeping management reforms. Each of the plan’s objectives includes a discussion of the reform efforts that will affect the objective. The plan also notes that the Secretary’s mission to restore the public’s trust in HUD—one of the purposes of the 2020 HUD Management Reform Plan—permeates the Department and is an integral part of each and every objective in the strategic plan. The plan states the important linkage to the HUD 2020 Management Reform Plan: “To create a new HUD, we will need the full range of approaches set out in the Strategic Plan and the Management Reform Plan. The success of these efforts is dependent on the success of the whole.” The annual performance plan, submitted to the Congress in March 1998, also provides a discussion of how the reform efforts affect each objective. The annual performance plan includes—in addition to the performance goals that are associated with specific strategic objectives—a number of performance goals for “management reform”; in both cases, the performance goals include indicators. However, the plan does not explicitly link the management reform goals to the strategic objectives or performance goals where there are logical opportunities to do so. 
For example, one proposed performance indicator under the management reform performance goals is “achieve a reduction in the number of troubled properties over the next five years.” This could logically support HUD’s strategic objective of increasing the availability of affordable housing in standard condition (one of whose indicators is “increase the percentage of project-based Section 8 units in standard physical and financial condition”), but the plan does not make this linkage. In reviewing HUD’s strategic plan, we observed that it contained a number of legislative proposals that appeared to affect the strategic objectives but did not make clear the impact on meeting the objectives if the legislative proposals were not enacted. We noted that the plan could be improved to better meet the purposes of the Results Act if this lack of clarity was addressed. More recently, in reviewing the annual performance plan, we noted that HUD does not discuss the impact on its annual performance goals if the proposed legislation is not enacted. In summary, Mr. Chairman, HUD is at a particularly crucial moment as it adapts to a significant loss of staff expertise; a workforce that includes large numbers of personnel assigned to new responsibilities; a new organizational structure with units whose specific duties, responsibilities, and operating procedures are still evolving; and the implementation of many new systems and processes. This situation merits the close attention of the Congress and HUD’s managers. We look forward to working with the Subcommittee in your oversight efforts. This concludes my prepared remarks. We will be pleased to respond to any questions that you or other Members of the Subcommittee might have. 
GAO discussed management issues concerning the Department of Housing and Urban Development (HUD), focusing on: (1) the progress HUD has made in addressing management deficiencies and the need for additional improvement; (2) the activities under HUD's 2020 management reform and other efforts to address its deficiencies; (3) issues that GAO believes are key as HUD implements its management reforms; and (4) the relationship between HUD's reform efforts and its Government Performance and Results Act plans. GAO noted that: (1) HUD has made progress in addressing problems that led to GAO's high-risk designation, but much remains to be done; (2) prior to announcing the 2020 management plan, HUD had among other things: (a) addressed internal control weaknesses by implementing a new management planning and control program and reduced the material weaknesses identified under the Federal Managers' Financial Integrity Act (FMFIA) assessment; (b) continued to make progress in improving its information and financial management systems; (c) completed a field reorganization that transferred direct authority for staff and resources to the Assistant Secretaries; and (d) made some progress in addressing problems with staff members' skills and with resource management; (3) however, GAO's recent work and that of the Inspector General indicate the need for continued progress in these areas; (4) under HUD 2020 Management Reform Plan and related efforts, HUD is in the process of making significant changes that will affect most aspects of its operations; (5) the plan calls for reducing the number of programs, reducing staffing, retraining the majority of the staff, reorganizing the 81 field offices, consolidating processes and functions within and across program areas into specialized centers, and modernizing and integrating the financial and management information systems; (6) several interrelated issues are particularly important for achieving the intended benefits of HUD's management 
reform efforts: (a) HUD's ability to meet planned timetables for implementing key reforms; (b) the adequacy of staffing during and after the transition to the new HUD; (c) the Department's ability to reduce the number of troubled public housing authorities and troubled multi-family projects; and (d) HUD's ability to effectively improve its procurement and contracting practices, including its oversight of contractors; (7) the HUD 2020 Management Reform Plan appears to be the driving force behind agency operations, and it is clearly linked to the agency's strategic and annual performance plans required by the Results Act; (8) the degree to which HUD is successful in implementing its reform efforts will influence its success in meeting its goals and objectives outlined in the strategic and annual performance plans; (9) both appear to rely in part on many of the same legislative proposals that could affect HUD's staffing needs and the attainment of strategic objectives; and (10) HUD's strategic plan could be improved by clarifying the impact on meeting objectives if the legislative proposals are not enacted.
In 1934, the Eximbank was created to facilitate exports of U.S. goods and services by offering a wide range of financing at terms competitive with those of other governments’ export financing agencies. Such financing includes (1) loans to foreign buyers of U.S. exports; (2) loan guarantees to commercial lenders, providing repayment protection for loans to foreign buyers of U.S. exports; (3) working capital guarantees for pre-export production; and (4) export credit insurance to exporters and lenders, protecting them against the failure of foreign buyers to pay their credit obligations. The Eximbank is to absorb credit risks that the private sector is unwilling or unable to assume. Over the last 5 fiscal years, Eximbank financing commitments increased from $12.3 billion in 1992 to a high of $15.1 billion in 1993 and then declined to $11.5 billion in 1996. Because of the continued expansion of U.S. exports from $448 billion in 1992 to $706 billion in 1995, the proportion of U.S. exports supported by the Eximbank declined from 2.8 percent in 1992 to 1.7 percent in 1995. Although it is given broad discretion in implementing its programs, the Eximbank must comply with several statutory requirements. Among other things, the Eximbank is required to provide loans, loan guarantees, and export credit insurance at rates and on terms that are “fully competitive” with those of other foreign government-supported export credit agencies (12 U.S.C. sec. 635 (b)(1)(A),(B)); provide loans only in circumstances in which there is a reasonable assurance of repayment (12 U.S.C. sec. 635 (b)(1)(B)); seek to reach international agreements to reduce government-subsidized export financing (12 U.S.C. sec. 635(b)(1)(A)); and supplement and encourage, but not compete with, private sources of capital (12 U.S.C. sec. 635 (b)(1)(B)). The Eximbank operates under a renewable congressional charter that expires on September 30, 1997. 
The Eximbank’s activities and policies are overseen by its board of directors. The board, or appropriate designees, is responsible for approving support for individual transactions and making determinations of reasonable assurance of repayment. Prior to 1992, the budget did not measure the true costs of federal credit programs at the time of commitment. The Federal Credit Reform Act of 1990 (P.L. 101-508, Nov. 5, 1990) aimed to improve the budgeting of federal credit programs and requires government agencies, including the Eximbank, starting in fiscal year 1992, to estimate and budget for the total long-term costs of their credit programs on a net present value basis. Congress funds the Eximbank’s estimated credit subsidy costs (hereafter referred to as “subsidy costs”) through the annual appropriations process. Subsidy costs arise when the estimated program disbursements by the government exceed the estimated payments to the government, on a net present value basis. Administrative expenses receive separate appropriations and are reported separately in the budget. The act changed the budget treatment of credit programs so that their costs can be compared more accurately with each other and with the costs of other federal spending. (See app. I.) Executive branch agencies are required to calculate the subsidy costs of foreign loans and guarantees using annually updated ratings and risk premiums provided through the Office of Management and Budget’s (OMB) Interagency Country Risk Assessment System (ICRAS). Under this approach, each sovereign borrower or guarantor is rated on an 11-category scale ranging from A through F--, although the Eximbank limits support to those rated in the top eight categories (A through E-). 
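The credit reform accounting just described can be illustrated with a short sketch. All figures below (the loan amount, fee, repayment schedule, and discount rate) are hypothetical, invented only to show the mechanics of a net-present-value subsidy computation; actual estimates come from OMB's credit subsidy model, not this simplified calculation.

```python
# Hypothetical illustration of a credit subsidy cost on a net present value
# basis, as required by the Federal Credit Reform Act of 1990. The cash flows
# and discount rate are invented for illustration.

def npv(cash_flows, rate):
    """Discount a list of (year, amount) cash flows to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Government outflows: a $100 direct loan disbursed in year 0.
disbursements = [(0, 100.0)]

# Government inflows: an up-front fee plus principal and interest repayments
# over 5 years, with some repayments assumed lost to default in year 5.
collections = [(0, 1.0), (1, 22.0), (2, 22.0), (3, 22.0), (4, 22.0), (5, 20.0)]

rate = 0.06  # hypothetical discount rate

# Subsidy cost: estimated disbursements exceed estimated payments to the
# government, both measured on a present value basis.
subsidy_cost = npv(disbursements, rate) - npv(collections, rate)
subsidy_rate = subsidy_cost / 100.0  # cost per dollar of loan disbursed

print(f"Subsidy cost: ${subsidy_cost:.2f} per $100 disbursed")
print(f"Subsidy rate: {subsidy_rate:.1%}")
```

Under these made-up numbers the discounted collections fall short of the $100 disbursed, producing a positive subsidy cost that would require appropriated funds; if the discounted collections equaled or exceeded disbursements, the subsidy cost would be zero or negative.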
Generally speaking, A and B-rated markets are considered “low risk”; C, C-, and D markets are considered “medium risk”; and D-, E, and E- markets are considered “high risk.” In the future, many discretionary federal government programs, including the Eximbank’s programs, are projected to face increased budgetary constraints. The House Committee Report (104-600, May 29, 1996) accompanying the 1997 Foreign Operations, Export Financing, and Related Programs Appropriations Bill (H.R. 3540) states that the Appropriations Committee will be “hard pressed” to sustain appropriations for the Eximbank at current levels in future years and urged the Eximbank to consult with the Committee on its plans for overcoming the likely gap between demand and future federal resources. The OMB’s fiscal year 1997 Analytical Perspectives also projects a decline in Eximbank resources over the next 5 years from $726 million in fiscal year 1997 to $587 million in fiscal year 2001. From fiscal years 1992 to 1996, the Eximbank supported an average of $13.3 billion in export financing commitments (loans, loan guarantees, and export credit insurance) per year at an average subsidy cost of $750 million. These financing commitments supported U.S. exports to a number of low-, medium-, and high-risk markets. In fiscal year 1995, financing commitments to high-risk markets such as the Newly Independent States (NIS) represented a relatively small percentage of the Eximbank’s total financing commitments but accounted for a relatively large share of total subsidy costs. As shown in figure 1, the Eximbank’s export financing commitments reached an all-time high of $15.1 billion in fiscal year 1993. According to the Eximbank, the subsequent decline in export financing commitments is largely attributable to the economic downturn in some Latin American countries and a cyclical decline in U.S. aircraft exports. 
The Eximbank’s subsidy costs ranged from a low of $603 million in fiscal year 1992 to a high of $937 million in fiscal year 1994 and dropped to $864 million in fiscal year 1996. This trend is a reflection of the Eximbank’s financing activity in high-risk markets. As shown in figure 2, most of the Eximbank’s fiscal year 1995 financing commitments were in the low- and medium-risk categories. Financing commitments for high-risk markets represented a relatively small (13 percent) share of total financing commitments yet absorbed a relatively large (44 percent) share of credit subsidy costs. (In figure 2, the “Other” category includes short-term insurance and working capital guarantees.) In 1995, the Eximbank approved 2,049 financing transactions. As shown in figure 3, most of these transactions—83 percent—were made at or below a subsidy rate of 10 percent. The average subsidy rate for the 2,049 transactions was 5.6 percent, and the subsidy rates on individual transactions ranged from 0 percent to 63 percent. Eximbank support is provided to a variety of markets. In 1995, Latin America represented the largest single geographical region of Eximbank financing commitments and consumed the largest share (31 percent) of the Eximbank’s total subsidy costs. On the other hand, financing commitments to the NIS represented a relatively small share of overall financing commitments yet absorbed a relatively large share (23 percent) of the Eximbank’s total subsidy costs (see fig. 4). In fiscal year 1995, Mexico was the largest market for Eximbank-financed exports ($1.3 billion in total commitments) and absorbed the second largest individual share ($79 million) of the Eximbank’s total subsidy costs. In contrast, Russia was the fourth largest market ($521 million in fiscal year 1995) yet absorbed the largest individual share ($94 million) of the Eximbank’s subsidy costs. (See fig. 5.) 
Since 1992, Mexico has been the largest market for Eximbank-financed exports; Eximbank-supported exports to Russia increased from $65 million in fiscal year 1992 to $521 million in fiscal year 1995. One option we identified for reducing subsidy costs at the Eximbank would be to increase the fees charged for the Eximbank’s financing programs while still satisfying the congressional mandate for setting program fees at levels that are fully competitive with those of other export credit agencies (ECAs). The Eximbank currently sets its fees so that they are as low as or lower than about 75 percent of the fees charged by other major ECAs in the same importing country markets. Our analysis showed that if the Eximbank had raised its fees to a level as low as or lower than 55-60 percent of the fees charged by other major ECAs in the same markets, the Eximbank’s subsidy cost would have been about $63 million less in fiscal year 1995. The actual cost reductions associated with any fee increase would depend on the magnitude of the fee increases and on other variables, such as the sensitivity of U.S. exporters to price increases, as well as the risk levels, terms, and conditions of future transactions. Eximbank officials expressed concerns that raising fees could affect the international competitiveness of U.S. exporters who rely upon Eximbank programs. The U.S. government has been an advocate for ongoing efforts among OECD members to establish guidelines for setting fees for government-supported export financing. Ideally, such guidelines would provide all ECAs, including the Eximbank, an opportunity to reduce their subsidy costs without putting their exporters at a disadvantage relative to their competitors. Any proposed Eximbank fee increases need to be considered within the context of the ongoing OECD negotiations to reduce government export credit subsidies. The Eximbank charges fees in an attempt to compensate for the financial risks associated with direct loans, loan guarantees, and insurance. 
(Credit subsidy costs arise when the present value of fees, principal repayments, and interest payments is below the levels necessary to offset the present value of the expected government outlays.) Under its system, the Eximbank places each borrower/guarantor in one of eight country risk categories—A, B, C, C-, D, D-, E, and E-. Fee rates are based primarily on the assessed risk of the particular credit and the repayment term of the transaction. For example, a transaction with a repayment term of 5 years in the lowest risk category (A) would be charged a fee of $1 per $100, whereas one in the highest risk category (E-) would be charged a fee of $7.59 per $100 of each disbursement. The Eximbank periodically revises its fee levels to help ensure that they appropriately reflect credit risks and, at the same time, remain competitive with those of other ECAs. In 1995, the Eximbank adopted a transaction pricing approach for assessing risk and assigning fees for individual, nonsovereign credits in order to better deal with a growing portfolio of private risk. The Export-Import Bank Act of 1945, as amended (12 U.S.C. 635), gives the Eximbank discretion to set fees at levels that are commensurate with risks, but at the same time at levels that are “fully competitive” with the pricing and coverage of the export credit programs offered by other major ECAs. According to Eximbank officials, the Eximbank has interpreted “fully competitive” to mean that the Eximbank’s fees should be as low as or lower than about 75 percent of the fees charged by other major ECAs in the same importing country markets. They noted that the Eximbank’s goal is not to beat the lowest fee, but to be in the “best 20-25 percent” range. The implementation of this benchmark means that the Eximbank’s fees are generally as low as or lower than those charged by foreign ECAs. To illustrate the potential savings associated with changing the benchmark, we considered three possible scenarios. 
Our analysis shows that the Eximbank could have reduced its subsidy costs by about $63 million if it had set its fees to be as low as or lower than 55-60 percent of the fees charged by other major ECAs for sovereign financing in the same importing country markets. We used Eximbank program authorization data for fiscal year 1995, Eximbank fee data, and the same OMB financial model that the Eximbank uses to calculate its subsidy costs. To perform this analysis, we modeled the effects of raising fees by different percentage rates while holding all other variables in the OMB model constant, such as the dollar value of Eximbank-supported transactions, the interest rate, and the repayment term. Using the same methodology, we estimated that the Eximbank could have achieved approximately $35 million in subsidy savings by setting its fees so that overall, they were as low as or lower than about 65-70 percent of the fees for sovereign financing charged by its major ECA competitors in the same markets, and approximately $84 million in savings by setting fees so that they were as low as or lower than about 45-50 percent of the fees charged by these competitors. (See table 1.) According to the Eximbank, several factors play a role in the competitiveness of the loan, guarantee, and insurance programs that the Eximbank offers. These include both external factors (those that contribute to the overall demand for Eximbank support) and factors that are, to a greater extent, within the control of the Eximbank, such as the fees it charges and other technical program characteristics. Although interest rates and repayment terms for export credits (i.e., direct loans) are elements of cost competitiveness, they are highly constrained by the provisions of the OECD’s Arrangement on Guidelines for Officially Supported Export Credits. These provisions limit the variability of interest rates charged and repayment terms allowed by member ECAs. 
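The sensitivity analysis described above, raising fees while holding the other model variables constant, can be sketched in miniature. The transaction amounts, subsidy rates, and fee increment below are hypothetical, and treating extra fee receipts as a dollar-for-dollar offset is a simplification of the OMB model, which discounts fees as part of the full cash-flow stream.

```python
# Hypothetical sketch of the fee-sensitivity analysis: raise exposure fees by
# a fixed increment while holding transaction volume, interest rates, and
# repayment terms constant, and measure the drop in total subsidy cost.
# All figures are invented, not actual Eximbank or OMB data.

transactions = [
    # (authorized amount in $ millions, baseline subsidy rate)
    (500.0, 0.02),   # low-risk market
    (300.0, 0.08),   # medium-risk market
    (100.0, 0.25),   # high-risk market
]

def total_subsidy(txns, fee_increase=0.0):
    """Total subsidy cost in $ millions. Extra fee receipts are assumed to
    offset subsidy costs dollar for dollar, floored at zero per transaction."""
    total = 0.0
    for amount, subsidy_rate in txns:
        total += max(amount * subsidy_rate - amount * fee_increase, 0.0)
    return total

baseline = total_subsidy(transactions)
raised = total_subsidy(transactions, fee_increase=0.005)  # +50 basis points
print(f"Baseline subsidy: ${baseline:.1f} million")
print(f"After fee increase: ${raised:.1f} million "
      f"(savings: ${baseline - raised:.1f} million)")
```

As in the report's analysis, the savings scale with the size of the fee increase and the volume of affected transactions; what this toy model cannot capture is exporters' behavioral response to higher fees, which the report notes is difficult to predict.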
Since exposure fees charged for loans, loan guarantees, and insurance are now excluded from the OECD Arrangement, differences in fees are the most significant factor accounting for program cost differences between the Eximbank and other ECAs. When we asked Eximbank officials about the prospects for future fee increases, they responded that the Eximbank had raised fees in August 1994 and expressed concerns about how future increases would affect U.S. export competitiveness. They said that should the Eximbank lower its “competitiveness target,” the number of instances in which U.S. exporters would risk losing sales as a result of uncompetitive financing would also increase. At the same time, Eximbank officials said it is difficult, if not impossible, to predict the impact of fee changes on exporter behavior, because exporters’ sensitivity to fee changes would vary by transaction. As noted earlier, Eximbank fees are generally as low as or lower than those of its major competitors in most country markets. In some of the higher-risk markets, the Eximbank’s fee advantage over some of its competitors is even greater. For example, Eximbank fees are as low as or lower than those of its major competitors for about 85 percent of medium-term transactions in high-risk markets. Thus, we believe it would be possible for the Eximbank to further offset subsidy costs by raising fees while remaining competitive relative to other major ECAs. For instance, if fees were set at the 65-percent competitiveness level, Eximbank fees would still be as low as or lower than those of other ECAs in 65 percent of the same importing country markets, and the proportion of cases in which its fees would be higher than other ECAs would increase from 25 percent to 35 percent. The trade implications of increasing the Eximbank’s fees are uncertain and would depend in part on the magnitude and timing of such action. 
It is possible that charging higher fees would result in an incremental reduction in program participation on the part of U.S. exporters selling to some higher-risk markets. However, the overall impact of Eximbank fee increases is speculative. U.S. exporters’ sensitivity to a fee increase would depend on factors such as the size of the fee increase, the volume of U.S. exports to a particular market, and the risk of the importing market. Raising fees for financing transactions in the higher-risk markets, such as the NIS, could lead to a decline in U.S. exports to these countries, but we were unable to quantify the precise impact. The Eximbank, under the leadership of the U.S. Treasury, participates in ongoing OECD negotiations to minimize export financing competition and reduce government export credit subsidies. To preserve U.S. export competitiveness, any potential Eximbank fee increase should be considered within the broader context of progress made in these international negotiations. The United States, European Union (EU) member states, and other countries are attempting to limit government export credit subsidies and create a level playing field among their ECAs through the OECD. The OECD has promoted efforts to limit government subsidies and provide common guidelines for national export-financing assistance programs. The OECD’s Arrangement sets terms and conditions for government-supported export credits and has been progressively strengthened since it was first established in 1978. Although it was last modified in 1994 to require member countries to use only market-based interest rates on all government-provided export loans, it does not currently contain guidelines on the minimum fees member ECAs must charge for officially supported loans, loan guarantees, or export credit insurance. 
In 1994, the participants in the OECD Arrangement formed a Working Group of Experts on Premia and Related Conditions to create a framework for more uniform risk premiums (i.e., exposure fees). The working group’s goal is to develop guiding principles for setting fees, among other issues, before the 1997 OECD Ministerial Meeting. As part of this overall effort to gradually reduce government export finance subsidies, OECD members have tentatively agreed to work toward creating member export financing systems that include, among other things, (1) risk-based premiums (defined to include exposure fees) based on a common reference country classification system, (2) premiums that are set high enough to cover long-term operating costs and losses, and (3) a fee benchmarking system. According to an Eximbank official, it is hoped that these negotiations will eventually lead to an agreement for fee convergence that would allow for reductions in the costs of OECD members’ officially supported export financing programs. The agreement would be implemented after an appropriate transition period. The U.S. government is an advocate for reaching an OECD agreement in this area. Although the working group has developed a set of broad guiding principles, members have yet to agree on the extent to which fees should be covered by the OECD disciplines (practices), according to the Eximbank. The level and scope of the risks of the Eximbank’s programs could be reduced by several means, such as placing a ceiling on the maximum subsidy rate allowed in Eximbank transactions, reducing or eliminating program availability offered in high-risk markets, and offering less than 100-percent risk protection. Although these options, if implemented, could lead to significant subsidy savings for the Eximbank and an overall reduction in U.S. 
exposure to high-risk markets, they would also result in reduced levels of Eximbank-financed exports and could present important foreign policy tradeoffs. Representatives of the Eximbank, private financial institutions, and export finance trade associations we spoke to generally opposed making any changes to the Eximbank’s programs on the grounds that potentially disruptive effects would result. As shown in table 2, the options we identified, if implemented separately, would have resulted in subsidy savings of up to $157 million in fiscal year 1995, with only a slight effect (5 percent or less of total exports financed) on the overall level of U.S. exports supported with Eximbank financing. The estimated subsidy reductions and export losses listed were based on our analysis of Eximbank subsidy estimates and authorized commitments for fiscal year 1995. Our estimates assumed that all other factors, such as the volume of financing to specific markets, were unchanged and that there was no reaction by other ECAs. To reduce its subsidy costs, the Eximbank could consider placing a cap (limit) on the maximum subsidy rate that it could incur in a typical transaction (this would not include tied aid transactions). For example, although most of the 2,049 transactions completed in fiscal year 1995 had a 10-percent subsidy rate or less, relatively few (38) transactions had a subsidy rate of 25 percent or more. These transactions consumed approximately $123 million (about 18 percent) of the Eximbank’s total subsidy costs yet supported 3 percent of Eximbank export financing commitments for the year. We estimate that if the Eximbank had capped its subsidy rate for each transaction at 25 percent for fiscal year 1995, it would have saved about $123 million (assuming that fees were held constant and deals over a subsidy rate of 25 percent were not refinanced at lower rates). 
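The cap arithmetic described above can be illustrated with a short sketch. The transactions below are hypothetical, and, following the report's assumption, deals above the cap are treated as forgone rather than refinanced, so the savings equal their entire subsidy.

```python
# Illustrative subsidy-cap calculation. Under the assumption stated in the
# text (deals above the cap are not refinanced at lower rates), the savings
# from a cap equal the entire subsidy consumed by transactions whose subsidy
# rate exceeds it. All figures below are hypothetical, not Eximbank data.

CAP = 0.25  # maximum allowed subsidy rate per transaction

transactions = [
    # (financed amount in $ millions, subsidy rate)
    (40.0, 0.30),
    (200.0, 0.08),
    (15.0, 0.40),
    (500.0, 0.02),
]

# Subsidy consumed by the over-cap transactions is the estimated savings.
savings = sum(amt * rate for amt, rate in transactions if rate > CAP)
print(f"estimated savings from a {CAP:.0%} subsidy cap: ${savings:.1f}M")
```

Under a different assumption, in which over-cap deals were restructured at exactly the cap rate, savings would instead be the excess above the cap, which is why the report flags its no-refinancing assumption explicitly.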
Eximbank officials noted that a subsidy cap could have a disproportionate impact on high-risk markets, such as the NIS, and therefore would limit the Eximbank’s ability to support high-risk transactions in these markets. However, we noted that this option involves a subsidy cap that exceeds the average subsidy rate (21.6 percent) for all Eximbank-supported transactions to the NIS. In other words, this option is unlikely to eliminate all financing transactions in this market—just those with a subsidy rate greater than 25 percent. Before 1994, the Eximbank provided short-, medium-, and long-term financing in low- and medium-risk markets. In 1994, as part of an effort to provide additional support to emerging market economies, the Eximbank expanded the level of financing available to U.S. exporters to high-risk E and E- markets to include long-term coverage for E markets and medium-term cover for E- markets. As a result, in fiscal year 1995, Eximbank services were available in more E- markets than were those of any of its major competitors, and the Eximbank provided unrestricted program cover for almost twice as many markets as did Germany, its nearest competitor. The Eximbank cannot operate in certain high-risk markets (F risk category) because, in the judgment of its board, the Eximbank cannot expect reasonable assurance of repayment. Another way in which the Eximbank could reduce its risk and associated subsidy costs in the high-risk (E and E-) categories would be to provide only short-term support in high-risk markets. Short-term transactions typically involve lower risks and subsidy expenditures than medium- and long-term transactions in the same market. We estimated that this option would have reduced subsidy costs by up to $157 million (23 percent of the Eximbank’s total subsidy costs) and reduced the Eximbank financing commitments by approximately $582 million, or about 5 percent, in fiscal year 1995. 
Another option that the Eximbank could consider would be to withdraw completely from E- markets; this would substantially reduce its exposure to high-risk transactions. Eliminating coverage in the Eximbank’s most risky markets (E-) would have produced subsidy savings of up to $122 million and eliminated approximately $394 million in Eximbank export financing commitments in fiscal year 1995—about 3 percent of the Eximbank’s total financing commitments for the year. This option would result in a reduction of Eximbank subsidy costs, but there could be implications for trade promotion and foreign policy objectives. First, Eximbank officials and some U.S. exporters stated that while exports to high-risk (E and E-) markets are small relative to total U.S. exports, the long-term export potential of these markets could be substantial. Eximbank officials point out that one of the agency’s primary objectives is to help U.S. exporters gain an early foothold in the high-risk, potentially high-growth markets in which the private sector is unable or unwilling to venture. Second, reducing program availability in high-risk markets would result in a reduction in Eximbank-supported transactions to transitional market economies, such as Russia, that the U.S. government’s foreign policy establishment is trying to assist. Eximbank officials said that the Eximbank is an independent agency and not part of the U.S. foreign policy or assistance apparatus (i.e., the Eximbank will not support a noncreditworthy transaction to meet U.S. foreign policy objectives). However, they stated that restrictions on Eximbank financing to high-risk markets such as the NIS may be perceived as a reduction of U.S. support for the region and detract from the U.S. government’s efforts to promote regional stability. The potential negative effect of this option on trade and foreign policy objectives could be moderated in a number of ways. 
Various export financing techniques exist that would allow the Eximbank to reduce the risks of some of the transactions in these markets. For example, the Eximbank’s Russian Oil and Gas Framework Agreement, which gives support for longer-term transactions that generate hard currency earnings, has relatively lower risks and thus is the recipient of full Eximbank support. These transactions are budgeted at lower subsidy rates. It is also important to note that a number of other federal programs support U.S. foreign policy objectives in the NIS. Since 1990, 23 government agencies have obligated $5.4 billion for technical assistance programs, grants, exchange programs, training, food and commodity donations, science and technology projects, and support of joint space efforts. The U.S. government also made available $10 billion in credit for bilateral loans, loan guarantees, and insurance programs for fiscal year 1990 through December 1994. Trade and investment programs include those sponsored by the Department of Agriculture and the Overseas Private Investment Corporation (OPIC). Thus, concerns about restrictions on Eximbank support and its impact on U.S. policy objectives in this region would need to be viewed in the broader context of the overall U.S. assistance effort. Currently, the Eximbank provides 100-percent unconditional political and commercial risk protection on virtually all of the medium- and long-term cover that it issues. Some of the Eximbank’s major competitors, such as European ECAs, on the other hand, generally require exporters and banks to assume a portion of the risks associated with such support and do not absorb 100 percent of the risks involved. Instead, they require that exporters or banks assume a minimum percentage (usually 5 percent to 10 percent) of the risks. This concept of risk-sharing is a fundamental difference between the Eximbank and EU ECAs. 
One way to reduce the Eximbank’s subsidy costs across all risk categories would be to have private sector participants assume more of the risks in Eximbank-supported transactions. For example, if the Eximbank only financed up to 95 percent of the risks of an export transaction, private sector banks or exporters would have to assume the remaining 5 percent risk. According to Eximbank officials, this requirement may also provide private sector lenders greater incentives to properly evaluate loan applications for which they seek Eximbank support because they will share more of the risks associated with such transactions. Eximbank officials said that the Eximbank does not generally require the private sector to engage in greater risk-sharing in its loan guarantee program because the private sector is usually unwilling to accept the risks associated with Eximbank-financed transactions. These officials cited a number of other concerns related to greater risk-sharing, including (1) the presence of bank regulatory requirements that banks maintain higher loss reserves for foreign loans not fully covered by Eximbank guarantees, (2) the higher cost of trade financing that would result if private lenders were required to raise their fees to compensate for additional export risks, and (3) the reluctance of smaller U.S. banks to engage in trade finance if they have to take on additional risk. Representatives of trade associations that we interviewed also stated that greater risk-sharing requirements will frustrate the small businesses that have fewer options to structure alternative financing than do large firms. Eximbank officials also told us that the introduction of increased risk-sharing requirements may result in higher administrative costs to the Eximbank and may also impair private banks’ ability to “securitize” loans backed by Eximbank guarantees. 
However, several factors may mitigate some of the effects of a requirement for increased risk-sharing and should be considered as well when assessing the feasibility of this option. According to Eximbank officials, the agency used to require some risk-sharing and only started offering 100-percent risk coverage (principal and interest) through its loan guarantee program beginning in the late 1980s. Before that, it offered 95-percent loan guarantees on interest. Similar objections about increased private sector risk-sharing requirements were raised when the Eximbank reduced its risk coverage on its export working capital guarantee program from 100 percent to 90 percent of principal and interest in September 1994. However, U.S. exporters and their commercial lenders continued to use this program at an increased rate after these changes went into effect. The volume of Eximbank export working capital loan guarantees increased 99 percent from fiscal year 1994 to fiscal year 1995. U.S. banks that we interviewed acknowledged that they utilize the services of competitor ECAs that generally provide less than 100-percent risk coverage in support of their trade finance activities. Private sector lenders and a representative of Moody’s Investor Services stated that Eximbank-backed loans with less than 100 percent cover could still be securitized, although the structure and pricing of the security would reflect the higher marginal risk associated with the reduced U.S. government cover. Project financing is a rapidly expanding export financing mechanism that the Eximbank is using to meet the needs of U.S. exporters and project sponsors while taking advantage of the growth of privatization and private sector-oriented reforms in various developing countries. It involves lending for major capital projects where the assurance of repayment is provided through the project’s structure and anticipated future revenues rather than through sovereign or other forms of guarantee. 
In contrast to traditional Eximbank financing, the Eximbank and lending institutions depend primarily upon the financial success of the project for the repayment of loan principal and interest. Project financing provides the Eximbank considerable flexibility in stipulating which risks it will assume on a project-by-project basis, thus permitting the government to reduce risk and subsidy usage. Eximbank project financing support is generally available in A through D risk markets, is available in E markets only in limited circumstances, and is not available in E- markets. Project finance requires a relatively stable legal and commercial environment in the host country in order for risk mitigation of the project to be possible. Some high-risk markets do not yet have the legal and commercial structures that would make project finance possible. Once a project is completed, the Eximbank provides the same level of risk coverage, that is, 100 percent guarantees, under its project finance program as is currently available in its traditional program—but only for the share of the project that is financed by the Eximbank. During the project construction period, the Eximbank provides only political risk coverage; other risks are assumed by the private sector. The Eximbank’s policy is that the project sponsor and other participants must assume a portion of the entire project risk and all or most of the commercial and technical risks during the construction phase of an infrastructure project. The Eximbank may also share project risks with other ECAs, multilateral institutions such as the International Finance Corporation, or OPIC. The Eximbank may also look for opportunities to establish hard currency escrow accounts outside the project country for certain projects and seek other risk mitigation to protect taxpayer interests and to reduce the associated subsidy costs. 
According to the Eximbank, if properly structured, these techniques can lower the risks for the Eximbank that would otherwise be involved in supporting these transactions in risky countries. Because this program is fairly new, it is too early to determine if budget estimates are accurate. Under the project finance program, many of the administrative costs that the Eximbank traditionally incurs in evaluating a project’s financial, legal, and technical risks are to be borne by the private sector rather than the Eximbank. When the Eximbank assumes any risks, private sector project participants are expected to pay premiums that compensate the government for most of these risks. However, the Eximbank’s ultimate flexibility in passing these costs on to the private sector may be constrained by the level of fees charged by competitor ECAs. Eximbank officials told us that the Eximbank’s goal is to structure project financing transactions in a manner that will ultimately require no taxpayer subsidy. Although the Eximbank has yet to meet this program goal, the average subsidy rate for project finance transactions was lower than the overall subsidy rate incurred for all Eximbank transactions in 1996. In that year, the average budgetary cost for project finance deals was about 3 percent, whereas the average cost for all Eximbank-supported transactions was 7.5 percent. According to Eximbank officials, the Eximbank’s success in meeting this goal will ultimately depend on its willingness and ability to limit risk in high-risk transactions while complementing, but not competing with, the private sector. The Eximbank’s project finance program has expanded over the past few years and has accounted for an increasing proportion of Eximbank transactions. In 1993, project finance accounted for less than 1 percent of the Eximbank’s total financing commitments. By fiscal year 1995, project finance constituted almost 20 percent of the Eximbank’s total financing commitments. 
The Eximbank attributes this growth to developing countries’ emphasis on privatization, their need to reduce sovereign debt obligations, and the rapid economic growth in emerging markets. As shown in figure 6, Eximbank project financing commitments expanded from $150 million in 1993 to $2.1 billion in fiscal year 1995—the program’s first full year of operation. Eximbank project financing declined slightly, to $1.7 billion, in fiscal year 1996. From fiscal year 1993 to 1995, the Eximbank approved a total of 11 project finance transactions valued at more than $2.6 billion. Six of these projects were located in Asia, four in Latin America, and one in Europe. These projects were generally located in countries that the Eximbank rates as medium-risk category countries and were mostly power generation infrastructure projects. The growth of project financing is consistent with the Eximbank’s expanded financing of exports to private sector buyers in developing countries. According to Eximbank officials, in fiscal year 1992, 71 percent of the Eximbank’s financing commitments were used to support foreign public sector purchasers, while 29 percent was used to support foreign private sector purchasers in emerging markets around the world. By fiscal year 1995, the ratio was roughly reversed—about 35 percent of the Eximbank’s commitments supported foreign public sector purchases and about 65 percent supported foreign private sector purchasers. Although private sector purchasers are becoming larger users of export financing in developing countries and the Eximbank’s project financing program provides the flexibility necessary for reducing the subsidy cost of these transactions, this method of financing is not suitable for all transactions. It is best suited for large transactions, for example, major infrastructure projects, that generate revenues that are sufficiently high to repay debt obligations. Eximbank financing helps support the sale of billions of dollars of U.S. 
goods and services to foreign markets each year consistent with U.S. foreign policy interests but comes at a cost to U.S. taxpayers—about $3.75 billion in appropriated program funds over the last 5 years. OMB projects a substantial decline in these resources over the next 5 years. We identified two options for reducing the Eximbank’s subsidy costs: (1) raising fees (based on a modified definition of “fully competitive”) and (2) reducing program risks. If implemented, these options may help the Eximbank respond to the projected decline in resources over the next 5 years. These options would not require a change in the Export-Import Bank Act of 1945, as amended, because they fall within its present authority. However, these options need to be considered within the full context of their trade and foreign policy implications and should be consistent with the Eximbank’s other statutory obligations. Raising exposure fees within the context of ongoing international negotiations to reduce government export credit subsidies seems to be the less disruptive of the two options for a number of reasons. First, Eximbank fees for sovereign financing are generally lower than those of other ECAs in most country markets. Second, a fee increase could be implemented without raising some of the foreign policy concerns associated with restricting or eliminating program coverage in certain risky markets. Third, exposure fee increases are compatible with U.S. government efforts to minimize competition in government-supported export financing among OECD members and consistent with its legal directive to do so. The magnitude and timing of any fee increases should take into account progress in the ongoing OECD negotiations to minimize the possible competitive impact on U.S. exporters. In commenting on a draft of this report, the Eximbank made four general observations: The subsidy cost computations should only be interpreted as approximations of the cost of the Eximbank’s programs. 
Because no estimate of the amount of lost exports, jobs, federal tax revenue, or other potential adverse consequences of raising fees or lowering the amount of the Eximbank’s risk has been developed, the implementation of the options presented needs to be considered with great caution. Taking unilateral action to increase fees could undermine the U.S. negotiating position with other OECD members. Other alternatives for reducing the Eximbank’s budget that would not adversely affect the Eximbank’s programs have not been considered. Regarding our cost computations, we used the same methodology for our analysis that the Eximbank uses to estimate the subsidy costs for its official budget submission to Congress. We agree that the methodology utilized to meet the requirements of the Credit Reform Act yields estimates and that the actual costs for a particular case may be higher or lower than the estimate. However, since the actual costs cannot be predetermined, these estimates can be used to make rational program decisions. In preparing this report, we did consider the likely impact of the options we identified on Eximbank-supported exports. We acknowledge that we could not precisely quantify the impact of implementing the fee increases. Nevertheless, our review indicated that the Eximbank could raise fees while still maintaining a competitive position relative to other ECAs. Moreover, we did estimate that the reduction in Eximbank-financed sales associated with reducing program risks would only have a slight effect (5 percent or less) on the overall level of U.S. exports supported with Eximbank financing in a given year. Our past work has shown that no definitive empirical research exists that demonstrates unequivocally the net macroeconomic impact on the nation—positive or negative—of government funding for federal export promotion programs or of reductions in Eximbank funding levels. 
It is difficult to fully quantify the net benefits of federal export promotion programs because it is difficult to demonstrate “additionality,” that is, the level of exports that would not have occurred in the programs’ absence. Proponents and opponents of these programs have mainly relied on qualitative arguments to state their cases rather than demonstrate quantitatively the impact on exports, jobs, and federal tax revenue. We recognize that the most efficient means to reduce Eximbank subsidies without disadvantaging U.S. exporters is to reach agreement with other ECAs to lower and eventually eliminate export subsidies. We clearly stated in the report that any proposed fee increases should be considered only in the broader context of the ongoing OECD efforts to negotiate minimum fee schedules and that the magnitude and timing of such an increase should take into account progress in these negotiations. The Eximbank also commented that we did not recognize other alternatives for reducing the Eximbank’s budget without adversely affecting its program. While other options for reducing the Eximbank’s funding may exist, we believe that we focused on the two most feasible ones—raising fees or reducing program risks—that could be implemented within the Eximbank’s existing authority. The two options proposed by the Eximbank (changing its program mix and making greater use of asset-based financing) would have some drawbacks that were not disclosed in its letter. For example, the budgetary and possible implementation difficulties associated with the greater use of Eximbank loans as opposed to guarantees are not addressed in the Eximbank’s comments. In addition, the potential for extending asset-based financing beyond the areas noted in the Eximbank’s comments is uncertain. Finally, it is not clear whether OMB would endorse any of the Eximbank options noted in its letter. The Eximbank’s comments are reprinted in appendix II, along with our specific evaluation of them. 
The Eximbank also provided technical corrections and updated information that were incorporated throughout the report where appropriate. To develop information on how the Eximbank spends its program funds, we reviewed budget data provided to us by the Eximbank and reviewed various Eximbank reports, including annual reports, budget reports from the Office of the Chief Financial Officer, and the Eximbank’s 1992-96 Report to the U.S. Congress on Export Credit Competition and the Export-Import Bank of the United States. In addition, we completed a transaction analysis of the Eximbank’s financing commitments made in fiscal years 1994 and 1995, including an analysis of the Eximbank’s high-subsidy transactions. We defined “high-subsidy” transactions as those transactions that consumed $1 million or more of the Eximbank’s subsidy budget in a given year or consumed a subsidy of 15 percent or more of the financed amount. We did not independently verify the accuracy of this data. Our report focused on the Eximbank’s use of its program subsidy appropriation—the largest component of its annual appropriation. To create a conceptual framework for identifying and assessing the available options for reducing the Eximbank’s subsidy appropriation, we reviewed various governmental, research, and trade association reports, including those of the Eximbank, the Congressional Budget Office, the Institute for International Economics, the CATO Institute, the Coalition for Employment Through Exports, and the National Association of Manufacturers. We also interviewed officials from these organizations and the private banking industry to obtain their views on the feasibility and likely impact of the options. 
To illustrate potential subsidy savings associated with different levels of fees, we estimated the possible subsidy savings that would have been obtained in fiscal year 1995 by setting fees within the 45-50, 55-60, and 65-70 percent competitiveness levels, holding all other variables (such as dollar value of transactions, interest rates, and repayment terms) constant. We used aggregate Eximbank fee data and an OMB financial model to perform this analysis. (We did these calculations by setting fees at a level that fell within the range we specified. Specifically, the fees selected for our analysis at the 45-50, 55-60, and 65-70 percent competitiveness levels included medium- and long-term fees set at the 47th and 46th, 57th and 56th, and 66th and 65th percentiles, respectively). These fee comparisons were based on medium- and long-term sovereign financing, which is currently the only basis for comparison, although we recognize that an increasing portion of Eximbank program activity is for nonsovereign transactions. Eximbank officials told us that any interpretation of such analysis must include a recognition of the potential pitfalls of fee comparisons. The differences in program characteristics, coverage restrictions, and other variables may limit the accuracy of such comparisons. We did not independently verify the accuracy of the Eximbank’s fee data or test the OMB’s ICRAS subsidy model for accuracy in predicting subsidy cost estimates. Instead, we have accepted the validity of the current model and explored the options for reducing program subsidies through the data generated by the current OMB-approved model. To assess the effects of limiting program risks, we identified the Eximbank’s financing and subsidy commitments made in various risk categories. We focused on the effects of limiting program risks in higher-risk markets because the Eximbank’s subsidy costs in these markets are large relative to other markets. 
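The percentile-based fee selection described above can be sketched with a generic nearest-rank percentile helper. The competitor fee values below are hypothetical, and the mapping from a percentile to a "competitiveness level" follows the report's own convention, which is only summarized here.

```python
import math

# Illustrative nearest-rank percentile calculation for fee setting: pick the
# fee at a chosen percentile of competitor ECA fees in a market. The
# competitor fee values (percent of contract value) are hypothetical.

def fee_at_percentile(competitor_fees, pct):
    """Return the fee at the given percentile using the nearest-rank method."""
    ranked = sorted(competitor_fees)
    rank = max(1, math.ceil(pct / 100 * len(ranked)))
    return ranked[rank - 1]

competitor_fees = [1.2, 1.5, 1.8, 2.0, 2.4, 2.9, 3.1, 3.6]

fee_57 = fee_at_percentile(competitor_fees, 57)  # a mid-range setting
fee_66 = fee_at_percentile(competitor_fees, 66)  # a higher setting
print(f"57th percentile fee: {fee_57}%, 66th percentile fee: {fee_66}%")
```

As the report cautions, such comparisons are only as good as the underlying fee data: differences in program characteristics and coverage restrictions mean two nominally equal fees may not buy equivalent cover.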
We did not model the effects that increased risk-sharing would have on the Eximbank’s subsidy expenditures; rather, we accepted the analysis that the Eximbank completed on this issue. To complete our work related to project finance issues, we reviewed trade and academic literature, interviewed project specialists at the Eximbank and the World Bank, and interviewed financial experts in Washington, D.C.; New York; and London. We conducted our review from February 1996 to September 1996 in accordance with generally accepted government auditing standards.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other congressional committees and the Chairman and President of the Eximbank and will make copies available to other interested parties upon request. This review was done under the direction of JayEtta Z. Hecker, Associate Director. If you have any questions concerning this report, please contact Ms. Hecker at (202) 512-8984. Major contributors to this report are listed in appendix III.

After over 20 years of discussion about the shortcomings of using cash budgeting for credit programs and activities, the Federal Credit Reform Act of 1990 was enacted. The act changed the budget treatment of credit programs so that their costs can be compared more accurately with each other and with the costs of other federal spending. Prior to the act’s implementation in fiscal year 1992, it was difficult to make appropriate cost comparisons between direct loan and loan guarantee programs and between credit and noncredit programs. Credit programs—like other U.S. government programs—were reported in the budget on a cash basis (i.e., loan guarantees did not show up in the budget unless and until they defaulted). This created a bias in favor of loan guarantees over direct loans. 
In the budget year, loan guarantees appeared to be free, while direct loans appeared to be expensive because the budget did not recognize that at least some of the loan guarantees would default and that some direct loans would be repaid. Under the act, the President’s budget for fiscal year 1992 and after must include the total estimated net cost to the U.S. Export-Import Bank (Eximbank) of the cash flows, discounted to present value, of its direct loans, guarantees, and insurance. Credit reform requirements separate the government’s cost of extending or guaranteeing credit, called the “subsidy cost,” from administrative costs. Administrative expenses receive separate appropriations and are reported separately in the budget. The Credit Reform Act defines the subsidy cost of direct loans as the present value—at the time of disbursement—of the net cash flows, that is, the disbursements by the government minus estimated payments to the government after adjusting for projected defaults, prepayments, fees, penalties, and other recoveries. The act defines the subsidy cost of loan guarantees as the present value—at the time of disbursement—of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (including fees, penalties, and recoveries). Agencies subject to the Credit Reform Act, such as the Eximbank, use a special budget system to record the budget information necessary to implement credit reform. Three types of accounts—program, financing, and liquidating—are used to handle credit transactions. Credit obligations and commitments made on or after October 1, 1991—the effective date of credit reform—use only the program and financing accounts. The program account receives separate appropriations for the administrative and the subsidy costs of a credit activity. 
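The statutory definitions above reduce to a present-value calculation. The following is a minimal sketch of that arithmetic with hypothetical cash flows and a hypothetical discount rate; it is not OMB's ICRAS model.

```python
def present_value(cash_flows, rate):
    """Discount a list of year-end cash flows back to the time of disbursement."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def direct_loan_subsidy(disbursement, receipts, rate):
    """Direct-loan subsidy cost: the government's disbursement minus the
    present value of estimated payments to the government (after adjusting
    for projected defaults, prepayments, fees, penalties, and recoveries)."""
    return disbursement - present_value(receipts, rate)

def guarantee_subsidy(payments, receipts, rate):
    """Loan-guarantee subsidy cost: present value of estimated payments by
    the government (defaults, delinquencies, interest subsidies) minus the
    present value of payments to the government (fees, penalties, recoveries)."""
    return present_value(payments, rate) - present_value(receipts, rate)

# Hypothetical example: a $100 direct loan expected to return $40 a year
# for two years after adjusting for defaults, discounted at 5 percent.
cost = direct_loan_subsidy(100.0, [40.0, 40.0], 0.05)  # about $25.6
```

A positive result is the amount the program account must pay the financing account when the loan is disbursed; if the estimates later prove wrong, the permanent, indefinite appropriation covers the reestimates.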
When a direct or guaranteed loan is disbursed, the program account pays the associated subsidy cost for that loan to the financing account. Figure I.1 diagrams this cash flow: loan disbursements and payments for loan guarantees flow out of the financing account, while collections (i.e., fees, principal/interest, and recoveries from defaults) flow in. If subsidy costs are accurate, the financing account will break even over time as it uses its collections to repay its Treasury borrowings. The program account has a permanent, indefinite appropriation for reestimates made to cover estimation errors. Credit activities conducted before October 1, 1991, are reported on a cash basis in the liquidating account. This account continues the cash budgetary treatment used before credit reform and has permanent, indefinite budget authority to cover any losses. The following are GAO’s comments on the Eximbank’s letter dated November 6, 1996. 1. The Eximbank’s portfolio reestimates are not directly comparable to the credit reform estimates the Eximbank made for fiscal year 1996. Portfolio reestimates are conducted for the Eximbank’s past commitments that are still on the books, typically spanning a number of years. These projections are also estimates, and the actual costs of a particular commitment may ultimately be higher or lower. Credit reform subsidy allocations reflect only those commitments made in a given fiscal year. 2. The report noted that it is difficult to estimate the full impact of fee changes on exporter behavior and stated that U.S. exporters’ sensitivity to a fee increase would depend on several factors, including the size of the fee increase, the volume of U.S. exports to a particular market, and the credit risk of the importing market. We did acknowledge that raising fees for financing transactions in the higher-risk markets, such as the newly independent states of the former Soviet Union (NIS), could lead to a decline in U.S. exports to these markets. 3.
The report notes that to minimize the possible competitive impact on U.S. exporters, any proposed Eximbank fee increases should only be undertaken in the broader context of ongoing Organization for Economic Cooperation and Development (OECD) efforts to reduce government export subsidies. 4. Export financing commitments could still be made in high-risk markets even under the subsidy cap option we have identified. This option would not necessarily eliminate the financing of transactions to high-risk markets—just those with a subsidy rate exceeding a certain threshold. Furthermore, transactions with subsidy rates exceeding the threshold could potentially be restructured. Moreover, the foreign policy concerns associated with reducing Eximbank coverage to the NIS could at least partially be mitigated by numerous other U.S. assistance programs that we identified. 5. The scope of our review did not include a consideration of this option. Thus, we cannot comment on the feasibility or the full implications of this option contained in the Eximbank’s comments. 6. As we noted in the report, our fee comparisons were based solely on sovereign lending rates—the only type of lending where comparative data was available. 7. We acknowledge that several factors may play a role in determining the overall competitiveness of the Eximbank’s programs. However, fees are generally the most significant difference between the Eximbank and other export credit agency (ECA) programs. Other factors, such as interest rates and repayment terms, are highly constrained by OECD agreements. Richard Burkard, Senior Attorney
Pursuant to a congressional request, GAO reviewed: (1) how the Export-Import Bank of the United States (Eximbank) spends its program appropriation; (2) program options that the Eximbank may want to consider to reduce the cost of its export financing programs; (3) potential implications of these options; and (4) the nature and extent of Eximbank's involvement in a type of financing known as project financing. GAO found that: (1) in each of the last 5 fiscal years (FY) 1992 through 1996, the Eximbank has used an average of $750 million of its credit subsidy appropriation to support an average of $13.3 billion in export financing commitments; (2) these appropriations have facilitated exports to areas with important U.S. commercial and strategic interests; (3) high risk markets constituted a relatively small share of the Eximbank's total financing commitments yet absorbed a relatively large share of its subsidy costs in FY 1995; (4) GAO identified two broad options that would allow the Eximbank to reduce subsidies while remaining competitive with foreign export credit agencies (ECA): (a) raising fees for services; and (b) reducing the risks of its programs; (5) both of these options could result in significant reductions in subsidy costs and would allow the Eximbank to continue to operate with reduced federal funding; (6) the specific level of subsidy savings resulting from these program options would be dependent on several factors, including the willingness of exporters and participating banks to absorb increased costs and risks and the reaction of competitor ECAs; (7) the options GAO identified have several trade and foreign policy implications that decisionmakers would need to address before making any changes in the Eximbank's programs; (8) Eximbank officials noted that: (a) any proposed fee increases need to be considered within the broader context of current international efforts to gradually reduce government export finance subsidies; (b) these options could 
make Eximbank programs less competitive relative to other ECAs; and (c) these options would undermine U.S. government efforts to provide support in some higher-risk markets; (9) the project finance program was created to help U.S. exporters and project lenders compete for contracts for large capital projects in various developing countries; (10) the program has expanded over the past few years and has accounted for an increasing proportion of Eximbank transactions; (11) for FY 1996, project finance deals constituted about 3 percent of the Eximbank's total subsidy costs; (12) although project financing techniques appear to highly leverage available Eximbank resources, Eximbank officials said that this technique is suited only to long-term capital projects that the Eximbank expects to be self-sustaining; and (13) the Eximbank aims to structure its project finance program so as to limit its risks and minimize its budgetary costs.
FAA is the key federal agency responsible for certification of U.S. aviation products to be used in the United States and has a significant role in supporting approvals of U.S. products in other countries. Located in FAA’s Office of Aviation Safety (Aviation Safety), the Aircraft Certification Service (Aircraft Certification) issues certificates, including type certificates and supplemental type certificates, for new aviation products to be used in the national airspace system. Certification projects, which involve the activities to determine a new product’s compliance with applicable regulatory standards and to approve products for certificates, are typically managed by one of Aircraft Certification’s local offices (generally known as aircraft certification offices, or ACOs). Once all requirements are met, Aircraft Certification issues type certificates and supplemental type certificates for a range of U.S.-manufactured aviation products, including aircraft, helicopters, propellers, and engines. Aircraft Certification is implementing, and has set milestones for completing, 14 initiatives in response to the May 2012 recommendations of the Certification Process Committee. This Committee was chartered to make recommendations to Aircraft Certification to streamline and reengineer its certification process, improve efficiency and effectiveness within Aircraft Certification, and redirect resources to support certification. Several of the initiatives were originally begun as part of earlier certification process improvement efforts. The initiatives range from developing a comprehensive road map for major change initiatives to reorganizing the small aircraft certification regulations. Although we reported in 2013 that the Certification Process Committee’s recommendations were relevant, clear, and actionable, it is too soon for us to determine whether FAA’s 14 initiatives adequately address the recommendations.
According to an update prepared by FAA in January 2015, eight initiatives have been completed, and two are on track to be completed within 3 years. However, according to this update, one initiative was at risk of not meeting planned milestones, and three initiatives will not meet planned milestones, including the update to 14 C.F.R. Part 21—the regulations under which aircraft products and parts are certificated. We reported in July 2014 that this initiative was in danger of not meeting planned milestones because the October 2013 government shutdown delayed some actions FAA had planned to move it into the rulemaking process. In its January 2015 update, FAA indicated that the formal rulemaking project timeline has been delayed to late fiscal year 2015 to allow for additional work with industry on developing guidance material and new certificate holder requirements. Figure 4 illustrates the evolving status of the 14 initiatives based on the publicly available periodic updates reported by FAA. We found in October 2013 that Aircraft Certification lacked performance measures for many of these initiatives. As of July 2014, FAA had developed metrics for measuring the progress of the implementation of 13 of the 14 initiatives. According to FAA officials, they plan to develop these metrics in three phases. For the first phase, which was documented in the July 2014 update of its implementation plan, FAA developed metrics to measure the progress of the implementation of the initiatives. For the second phase, FAA plans to develop metrics for measuring the outcomes of each initiative. For the third phase, working with the Aerospace Industries Association and the General Aviation Manufacturers Association, FAA plans to develop metrics for measuring the global return on investment in implementing all of the initiatives, to the extent that such measurement is possible. FAA did not provide us a time frame for developing the second- and third-phase metrics.
While we continue to believe that this plan for establishing performance measures is reasonable, and recognizing that FAA is in the early stages of implementation, it is critical for FAA to follow through with its plans for developing and using metrics to evaluate improvements to the certification process. Without these metrics, FAA will be unable to fully determine whether its efforts have been successful in addressing the Certification Process Committee’s recommendations as intended, identify areas that may need more attention, and modify efforts to sufficiently address any gaps. In our previous work, we have reported on instances where the implementation of, and metrics related to, FAA efforts have not achieved the intended outcomes as expected, e.g., modernizing the air traffic control system under the Next Generation Air Transportation System (NextGen) and the integration of unmanned aerial systems into the national airspace system. Flight Standards has also developed initiatives in response to the six November 2012 recommendations of the Regulatory Consistency Committee, but the planned initiatives have not yet been released officially. This Committee was chartered to make recommendations to FAA to improve (1) the consistency with which regulations are applied in making certification decisions and (2) communications between FAA and industry stakeholders regarding such decisions. In late December 2014, FAA indicated that the draft plan to implement these recommendations was under review within FAA but that the final plan was expected to be published by the end of January 2015, more than a year past the initial target publication date of December 2013. However, in an October 2014 draft version of the plan that FAA provided to us before the plan's official release, FAA noted that it had closed 2 of the 6 recommendations and planned to complete the remaining 4 by July 1, 2016.
FAA also noted that it had developed performance measures to track the progress of the implementation of the remaining 4 recommendations. Table 1 provides a summary of the recommendations and FAA’s plans for addressing them, based on the October 2014 draft plan that FAA provided to us. We reported in 2013 that the Regulatory Consistency Committee took a reasonable approach in identifying the root causes of inconsistent interpretation of regulations, and that its recommendations are relevant to the root causes, actionable, and clear. However, it is too soon for us to determine whether FAA’s planned actions adequately address the recommendations. In addition, FAA’s draft plan stated that the resources required to implement the recommendations must be balanced with other important FAA activities, such as agency priorities and existing rulemaking initiatives, and that if future priorities change, FAA may be forced to modify elements of this implementation plan. As we reported in July 2014, it will be critically important for FAA to follow through with its initiatives aimed at improving the consistency of its regulatory interpretation, as well as with its plans for developing performance metrics to track the achievement of intended consistencies. We have previously reported that large-scale change management initiatives—like those recommended by the Regulatory Consistency Committee—require the concentrated efforts of both leadership and employees to realize intended synergies and accomplish new organizational goals. Further, industry representatives have continued to indicate a lack of communication with and involvement of stakeholders as a primary challenge for FAA in implementing the committees’ recommendations, particularly the regulatory consistency recommendations.
FAA has noted that the processes for developing and updating its plans for addressing the certification process and regulatory consistency recommendations have been transparent and collaborative, and that FAA meets regularly with industry representatives to continuously update them on the status of the initiatives and to seek their input. However, several industry representatives recently told us—and we reported in July 2014 (GAO-14-728T)—that FAA has not effectively collaborated with or sought input from industry stakeholders in the agency’s efforts to address the two sets of recommendations, especially the regulatory consistency recommendations. For instance, some stakeholders reported that FAA does not provide an opportunity for them to review and comment on the certification process implementation plan updates, and did not provide an opportunity for them to review and offer input on the regulatory consistency implementation plan. However, FAA did meet with various industry stakeholders in October 2014 to brief them on the general direction and high-level concepts of FAA’s planned actions to address each regulatory consistency recommendation. Representatives of the 15 selected U.S. aviation companies we interviewed as part of our ongoing work on foreign approvals reported that their companies faced challenges related to process, communications, and cost in obtaining approvals from FCAAs. The processes involved included FCAAs’ individual approval processes as well as the processes spelled out in the relevant BASAs. FAA is making some efforts to address these challenges, such as by holding regular meetings with some bilateral partners and setting up forums in anticipation of issues arising. According to FAA data, from January 2012 through November 2014, U.S. companies submitted approximately 1,500 applications for foreign approvals to a total of 37 FCAAs.
These data can be broken down to show the applications submitted to the top ten and other markets for foreign approvals from January 2012 through November 2014. The top ten total includes Hong Kong, which is counted separately from China. Other markets include the following bilateral partners, in descending order of the number of applications submitted: South Korea, South Africa, Taiwan, New Zealand, Malaysia, Israel, and Singapore. The percentages are based on an approximation of the total number of applications submitted by U.S. aviation companies. According to FAA, the number of applications may be undercounted because there is no formal requirement for U.S. aviation companies to submit applications to FAA for foreign approvals unless the country is an FAA bilateral partner. Thus, some applications may not have been entered into FAA’s tracking system. Of the 15 companies we interviewed, representatives from 12 companies reported mixed or varied experiences with FCAAs’ approval processes, and 3 reported positive experiences. Thirteen companies reported challenges related to delays, 10 reported challenges with the length of the approval process, and 6 reported challenges related to FCAA staffs’ lack of knowledge or uncertainty about the approval processes, including FCAA requests for data and information that, in the companies’ views, were not needed for approvals. Representatives of three companies stated that, in their opinion, the EU’s process is sometimes lengthy and burdensome, resulting in delays. Representatives of four companies noted examples of approval projects that, in their opinions, were expected to be granted by FCAAs within weeks or hours but instead took months or years. As an example, there were several media reports on the EU’s 4-year process for the approval of the Robinson R66 helicopter, which was reportedly awarded by EASA in May 2014.
However, because we were not provided the relevant factors and circumstances that could have affected the delays in the specific examples provided, we did not assess whether the approvals took longer than necessary. Eight companies also noted that China often makes requests for data and detailed product design information that, in their view, is not necessary for an approval, and sometimes holds up approvals until those requests are fulfilled. (The 737 MAX is Boeing’s newest family of single-aisle airplanes; it can accommodate up to 200 seats, and the first flight is scheduled in 2016, with deliveries to customers beginning in 2017.) FAA is working with China on an implementation procedures for airworthiness (IPA) agreement covering such approvals, which is expected to be completed in fiscal year 2015. According to FAA officials, this IPA is also expected to reduce the level of involvement of the Civil Aviation Administration of China (CAAC) in conducting approvals and prevent its certification staff from doing extensive research for each approval project. Although representatives from 11 of the 15 U.S. companies and the 3 foreign companies we interviewed reported being satisfied with the overall effectiveness of having BASAs in place or with various aspects of the current BASAs, representatives of 10 U.S. companies reported challenges related to some BASAs lacking specificity and flexibility, 2 raised concerns about the lack of a formal dispute resolution process, and 1 noted the lack of a distinction between approvals of simple and complex aircraft. Companies suggested several ways to address these issues, including updating BASAs more often and making them clearer. FAA has taken action to improve some BASAs to better streamline the approval process that those countries apply to imported U.S. aviation products. For instance, according to FAA officials, they meet regularly with bilateral partners to address approval process issues and are working with these partners on developing a common set of approval principles.
FAA also noted that there are basic dispute resolution clauses in most of the IPAs, and FAA is working toward adding specific dispute resolution procedures like those contained in the agreement with the EU. FAA aims to complete negotiations to add a dispute resolution clause to the BASA with China in fiscal year 2015. FAA officials also indicated that they are working with longstanding bilateral partners—such as Brazil, Canada, and the EU—to identify areas where mutual acceptance of approvals is possible. Representatives from 12 U.S. companies reported challenges in communicating with FCAAs. Representatives from six U.S. companies reported, for example, that interactions with developing countries can be confusing and difficult because of language and cultural issues. Representatives from two companies noted that they hire local representatives as consultants in China to help them better engage CAAC staff with their approval projects and to navigate the CAAC’s process. One company’s representative also reported better progress in communications with FCAAs in some Asian countries, such as India, Japan, and Vietnam, when a local “third-party agent” (consultant) is involved, because the agent provides a better relationship with the FCAAs’ staff. They added that this requires a lot of trust that the local agent will support the best interests of the company and that, at times, this arrangement becomes difficult because the company’s experts would prefer to be in charge of communications with FCAAs during the approval processes. Representatives from three companies also reported that, in general, some FCAAs often do not respond to approval requests or have no backups for staff who are unavailable. They noted that potential mitigations could include a greater FAA effort to develop and nurture relationships with FCAAs. According to FAA officials, they are working with the U.S.-China Aviation Cooperation Program to further engage with industry and Chinese officials.
Representatives from 12 of the 15 U.S. companies and 2 of the 3 foreign companies indicated challenges with regard to the approval fees charged by FCAAs. They specifically cited EASA and the Federal Aviation Authority of Russia (FAAR). For example, they noted that EASA’s fees are significantly higher than the amounts levied by other FCAAs (up to 95 percent of the cost of a domestic EASA certification), are levied annually, and are unpredictable because of the unknown amount of time it takes for an approval to be granted. The fees are based on the type of product being reviewed for approval and can range from a few thousand dollars to more than a million dollars annually. Representatives from two companies also noted that EASA lacks transparency about how the work it conducts to grant approvals aligns with the fees it levies to recover its costs. FAA officials indicated to us that a foreign approval should take significantly less time and work to conduct than an original certification effort—roughly about 20 percent—and that they have initiated discussions with EASA officials about making a significant reduction in the fees charged to U.S. companies. Representatives of two companies also indicated that some FCAAs (e.g., China and Indonesia) routinely conduct site visits to the United States to, for example, review data and conduct test flights. According to the companies we interviewed, these visits are paid for by the U.S. companies seeking the approvals, and the cost of these visits is unpredictable because the logistics and duration are determined by the FCAA. For example, representatives from one company told us that one FCAA typically conducts 2-week visits, but the company needs only one and a half days to provide information.
Four companies’ representatives told us that they sometimes (1) offer to send their staff to the FCAA or another location because they can often do so less expensively or (2) schedule these site visits to better coincide with a more favorable budget environment for the company. As mentioned previously, FAA provides assistance to U.S. companies by facilitating the application process for foreign approvals of aviation products. U.S. companies seeking to export their aviation products to countries with BASAs in place submit application packages for foreign approvals to an appropriate ACO. ACO staff facilitates this process by reviewing the application package for completeness, ensuring that all country-specific requirements are met, and then forwarding the package, along with an FAA cover letter, to the applicable FCAA for review and approval. According to FAA officials, after the FCAA has reviewed the package, the authority will sometimes submit “certification review items”—which document issues related to the original certification of a product that require an interpretation of how compliance was met, call for additional clarification, or represent a major technical or administrative problem—to the responsible ACO for review and response. The assigned ACO staff reviews these items, determines whether a response is required from the applicant company, and coordinates the response to the FCAA. In some cases, ACO staff prepares issue papers that outline, among other things, the certification basis upon which the original type certification was issued. Also, according to FAA officials, FAA staff supports general and technical meetings between applicant companies and FCAAs for foreign approvals. According to FAA officials, the agency strives to make its process for supporting foreign approvals of aviation products as efficient as possible.
In an effort to measure progress toward this goal, since January 2012 FAA has centrally tracked data on foreign approvals, including the total number of foreign approval applications received and processed, the dates that applications are received by FAA, the dates packages are sent by FAA to the FCAA, and the date when the FCAA ultimately approves or finalizes the application. These data can be broken down by export country, applicant company, and product type. As will be discussed later, however, FAA’s data on foreign approvals have some limitations. According to FAA staff in two ACOs, each field office is responsible for setting its own time goals related to processing foreign approvals. Officials in three field offices told us that their goal is for each foreign approval package to be forwarded to the FCAA within 30 days of receipt by FAA. FAA also collects other information about foreign approvals in an effort to assess its bilateral relationships and the overall effectiveness of its process. For example, for some foreign approval projects, FAA field staff must complete a Bilateral Relationship Management (BRM) form to provide feedback on the interaction with an FCAA; the form is submitted to FAA headquarters. As we will further discuss later, however, FAA officials acknowledged some issues with the BRM process, which they plan to address. Although FAA seeks to provide an efficient process, companies we interviewed reported challenges related to FAA’s role in the foreign approval process. FAA-related challenges cited by the companies we interviewed fell into three main categories: Process for facilitating foreign approvals. Most of the U.S. companies in our selection (12 out of 15) reported challenges related to FAA’s process for handling foreign approvals.
These included concerns about foreign approvals not being a high enough priority for FAA staff, a lack of performance measures for evaluating BASAs, and an insufficient use of FAA’s potential feedback mechanisms. For example, representatives of three companies told us that sometimes FAA is delayed in submitting application packets to FCAAs because other work takes priority; one of these companies indicated that sometimes FAA takes several months to submit packets to FCAAs. In another example, representatives of four companies cited concerns that BASAs do not include any performance measures, such as any expectations for the amount of time that it will take for a company’s foreign approval to be finalized. With regard to FAA using feedback mechanisms to improve its process for supporting foreign approvals, representatives of one company told us that applicant companies are not currently asked for post-approval feedback by FAA even though it would be helpful in identifying common issues occurring with foreign approvals. Available resources. Most of the U.S. companies in our selection (10 out of 15) reported challenges related to the availability of FAA staff and other resources. These include limited FAA travel funds and limited FAA staff availability to process foreign approval applications. According to FAA officials, FAA is responsible for defending the original type certification and, more broadly, for handling any disputes that arise with FCAAs during the foreign approval process. In doing so, FAA is also responsible for working with a FCAA in an authority- to-authority capacity, and communications should flow through FAA to the applicant company. However, representatives of five companies noted that due to a lack of FAA travel funds, FAA staff is generally not able to attend key meetings between U.S. companies and FCAAs conducted at the beginning of the foreign approval process. 
These representatives noted that this can complicate the process for companies, which then have to take on a larger role in defending the original type certificate issued for a product. Representatives of two companies also noted that limited FAA staff availability at the time a foreign approval application is received contributes to delays in obtaining their approvals. Industry stakeholders have continued to suggest that FAA should make fuller use of its delegation authority in several areas to better leverage available FAA resources. In fact, the Certification Process Committee made recommendations encouraging FAA to include the expansion of delegation in its efforts to improve the efficiency of its certification process. FAA's initiatives related to expanding the use of delegation appear to be moving in the right direction, but FAA's efforts have been slower than industry would like and expected. Staff expertise. Some of the U.S. companies in our selection (7 out of 15) reported issues related to FAA staff expertise. The cited issues included limited experience on the part of FAA staff in dispute resolution as well as limited expertise related to intellectual property and export control laws. For example, representatives of three companies told us that FAA staff sometimes lack technical knowledge due to having little to no experience with some aviation products, while a representative of another company argued that increased training for FAA staff in dispute resolution could be very helpful, especially for disputes involving different cultural norms. In another example, representatives of two companies described situations in which FAA staff was ready to share information with a FCAA that the applicant company considered proprietary until the company objected and other solutions were found. 
FAA has initiatives under way aimed at improving its process for supporting foreign approvals that may help address some of the challenges raised by the U.S. companies in our review. Specifically, FAA's current efforts to increase the efficiency of its foreign approval process could help address reported challenges related to FAA's process and its limited staff and financial resources. For example, FAA is planning to address its resource limitations by focusing on improving the efficiency of its process with such actions as increasing international activities to support U.S. interests in global aviation, and by implementing its 2018 strategic plan, which includes the possibility of allocating more resources to strengthening international relationships. FAA has also initiated efforts to improve the robustness of its data on foreign approvals, which in turn could further improve the efficiency of its process for supporting these approvals. With more complete data, FAA aims to track performance metrics such as average timeframes for foreign approvals and to better evaluate its relationships with bilateral partners. As previously mentioned, in 2012, FAA started tracking data on foreign approval packages received and processed. In addition, according to FAA officials, FAA currently tracks the time needed from initial receipt of a foreign approval application by an ACO to the date the application is forwarded to the FCAA. Currently, however, there is no formal written requirement for FAA field staff to enter foreign approval application information into the central tracking system, so not all applications are captured. FAA officials told us in December 2014 that the agency is developing formal requirements for field staff to enter data into this system, in order to ensure the integrity of data within its control, but they did not provide an expected time frame for completion. 
According to FAA staff in one field office, Aircraft Certification’s International Policy Office— which manages the central data system—recently updated this system with additional data fields to capture more data on the number of foreign approval projects in process and data for tracking performance metrics. As previously mentioned, FAA collects Bilateral Relationship Management (BRM) forms as a method for field staff to relay information on specific foreign approval projects—both positive and negative experiences—to headquarters. Based on discussions with us regarding policies related to BRM submissions, FAA officials told us that they plan to clarify BRM submission criteria and response policies for field and headquarters staff to enhance information gathered through this process. According to FAA, collecting, sharing, and taking appropriate action on information in BRM forms is necessary for FAA to recognize and resolve issues. Initially, FAA officials indicated that field staff is required to submit BRM forms whenever an employee meets with an official from a FCAA or foreign company, but that other issues can trigger the submission of BRM forms, such as when the FCAA is not adhering to the BASA, or is not actively engaged in certification activities. FAA officials also said that designated headquarters officials are required to respond to all BRM forms received within 48 hours. However, FAA officials at four ACOs we interviewed told us that field staff does not consistently submit BRM forms, and that when staff does submit BRM forms, field staff generally does not receive feedback from FAA headquarters about the information received in the form. For example, one ACO official indicated that his office’s staff is only likely to submit the BRM form when there is a significant issue regarding an ongoing foreign approval package, and not to report any positive outcomes or circumstances. 
Further, the official said that the Aircraft Certification's International Policy Office does not provide feedback on issues raised in these forms. Two officials from a different ACO indicated that the submission of BRM forms varies greatly by project manager, with some managers submitting these routinely whereas others do not submit them at all; these officials also indicated that their staff do not typically receive feedback from headquarters on submitted forms. After hearing these concerns raised by field staff about the BRM process, FAA headquarters officials indicated that they plan to clarify to field staff when BRM forms should be submitted and also clarify to designated headquarters staff that each BRM form requires feedback to the submitting field staff, but they did not provide an expected time frame for completion. These planned efforts should help improve the robustness and completeness of data shared in BRM forms. Some current FAA efforts to collect additional data on foreign approvals are aimed at improving FAA's ability to evaluate its relationships with its bilateral partners; such efforts could help address challenges raised by companies about FAA not having performance metrics to assess how well BASAs are working. For example, according to FAA officials, in November 2013, Aircraft Certification formally expanded its process for evaluating international partners to include risk-based evaluation methods. Officials noted that this evaluation process includes gathering quantitative and qualitative information about the effectiveness of bilateral partnerships. Officials explained that FAA uses a structured process to evaluate and to establish a risk factor for each foreign bilateral partner, based on information in the BRM forms, the number of foreign approval projects the respective authority has within FAA's system, and the authority's most recent ICAO airworthiness score, among other factors. 
FAA officials said that this evaluation system will continue to expand as FAA identifies new data sources. In conclusion, to its credit, FAA has made some progress in addressing the Certification Process and Regulatory Consistency Committees' recommendations, as well as in taking steps to address challenges faced by U.S. aviation companies in obtaining foreign approvals of their products. It will be critically important for FAA to follow through with its current and planned initiatives to increase the efficiency and consistency of its certification processes, and with its efforts to address identified challenges faced by U.S. companies in obtaining foreign approvals. Given the importance of U.S. aviation exports to the overall U.S. economy, forecasts for continued growth of aviation exports, and the expected increase in FAA's workload over the next decade, it is essential that FAA undertake these initiatives to ensure it can meet industry's future needs. To demonstrate that it is making progress on these initiatives, it is also important that FAA continue to develop and refine its outcome-based performance measures to determine what is actually being achieved through the current and future initiatives, and also through improvements to its data tracking for monitoring the effectiveness of its bilateral agreements and partnerships. Such outcome-based metrics will make it easier for FAA to determine the overall outcomes of its actions and relationships, hold field and headquarters staff accountable for the results, and demonstrate to industry stakeholders, congressional stakeholders, and others that progress is being made. Going forward, we will continue to monitor FAA's progress, highlight the key challenges that remain, and identify the steps FAA and industry can take to find a way forward on the issues covered in this statement as well as other issues facing the industry. 
As we noted in our October 2013 statement, however, some improvements to the certification processes will likely take years to implement and, therefore, will require a sustained commitment as well as congressional oversight. We are hopeful that our findings related to previous and ongoing work in these areas will continue to assist this Committee and its Subcommittee on Aviation as they develop the framework for the next FAA reauthorization act. Chairman Shuster, Ranking Member DeFazio, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to questions at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony statement include Vashun Cole, Assistant Director; Jessica Bryant-Bertail, Jim Geibel, Josh Ormond, Amy Rosewarne, and Pamela Vines. Other contributors included Kim Gianopoulos, Director; Dave Hooper; Stuart Kaufman; and Sara Ann Moessbauer. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
FAA issues certificates for new U.S.-manufactured aviation products, based on federal aviation regulations. GAO and industry stakeholders have questioned the efficiency of FAA's certification process and the consistency of its regulatory interpretations. As required by the 2012 FAA Modernization and Reform Act, FAA chartered two committees--one to improve certification processes and another to address regulatory consistency--that recommended improvements in 2012. FAA also assists U.S. aviation companies seeking approval of their FAA-certificated products in foreign markets. FAA has negotiated BASAs with many FCAAs to provide a framework for the reciprocal approval of aviation products. However, U.S. industry stakeholders have raised concerns that some countries conduct lengthy processes for approving U.S. products. This testimony focuses on (1) FAA's progress in implementing the certification process and regulatory consistency recommendations and (2) challenges selected U.S. companies face in obtaining foreign approvals. It is based on GAO products issued from 2010 to 2014, updated in January 2015 based on FAA documents, and preliminary observations from GAO's ongoing work on foreign approvals. This ongoing work includes an analysis of FAA data on approval applications submitted January 2012 through November 2014 and interviews of a nongeneralizable sample of 15 U.S. companies seeking foreign approvals, selected on the basis of the number of applications submitted and aviation product types manufactured. The Federal Aviation Administration (FAA) has made progress in addressing the Certification Process and the Regulatory Consistency committees' recommendations, but challenges remain and could affect successful implementation of the committees' recommendations. FAA is implementing its plan for completing 14 initiatives for addressing the 6 certification process recommendations. 
According to a January 2015 FAA update, 10 initiatives have been completed or are on track to be completed, whereas the rest are at risk of not meeting or will not meet planned milestones. FAA has developed plans for addressing the six regulatory consistency recommendations. In late December 2014, FAA officials indicated that the final plan to implement the recommendations is under agency review and is expected to be published in January 2015. According to a draft version of the plan, FAA closed two recommendations--one as not implemented and one as implemented in 2013--and plans to complete the remaining four by July 2016. While FAA has made some progress, it is too soon for GAO to determine whether FAA's planned actions adequately address the recommendations. However, industry stakeholders continue to indicate concerns regarding FAA's efforts. These concerns include a lack of communication with and involvement of stakeholders as FAA implements the two committees' recommendations. As part of its ongoing work, GAO interviewed representatives of 15 selected U.S. aviation companies, who reported facing various challenges in obtaining foreign approvals of their products, including challenges related to foreign civil aviation authorities (FCAA) as well as challenges related to FAA. Reported FCAA-related challenges involved (1) the length and uncertainty of some FCAA approval processes, (2) the lack of specificity and flexibility in some of FAA's bilateral aviation safety agreements (BASA) negotiated with FCAAs, (3) difficulty with or lack of FCAA communications, and (4) high fees charged by some FCAAs. Although FAA's authority to address some of these challenges related to FCAAs is limited, FAA has been addressing many of them. For example, FAA has created a certification management team with its three major bilateral partners to provide a forum for addressing approval process challenges, among other issues. 
FAA has also taken action to mitigate the challenges related to some BASAs by holding regular meetings with bilateral partners and adding dispute resolution procedures to some BASAs. Reported FAA-related challenges primarily involved (1) FAA's process for facilitating approval applications, which sometimes delayed the submission of applications to FCAAs; (2) limited availability of FAA staff for facilitating approval applications; and (3) lack of FAA staff expertise in issues unique to foreign approvals, such as intellectual property concerns and export control laws. FAA has initiatives under way to improve its process that may help resolve some of these challenges raised by U.S. companies. For example, FAA is making its approvals-related data more robust to better evaluate its relationships with bilateral partners. FAA is also addressing its resource limitations by taking actions to improve the efficiency of its process.
The National Wildlife Refuge System comprises the only federal lands managed primarily for the benefit of wildlife. The refuge system consists primarily of National Wildlife Refuges (NWR) and Waterfowl Production Areas and Coordination Areas. The first national wildlife refuge, Florida’s Pelican Island, was established by President Roosevelt in 1903 to protect the dwindling population of wading birds in Florida. As of July 1994, the system included 499 refuges in all 50 states and several U.S. territories and accounted for over 91 million acres. (See fig. 1.) The Fish and Wildlife Service’s (FWS) Division of Refuges provides overall direction for the management and operation of the National Wildlife Refuge System. Day-to-day refuge activities are the responsibility of the managers of the individual refuges. Because the refuges have been created under many different authorities, such as the Endangered Species Act (ESA) and the Migratory Bird Conservation Act, and by administrative orders, not all refuges have the same specific purpose or can be managed in the same way. The ESA was enacted in 1973 to protect plants and animals whose survival is in jeopardy. The ESA’s goal is to restore listed species so that they can live in self-sustaining populations without the act’s protection. As of April 1994, according to FWS, 888 domestic species have been listed as endangered (in danger of extinction) or threatened (likely to become endangered in the foreseeable future). The ESA directs FWS to emphasize the protection of listed species in its acquisition of refuge lands and in its operation of all refuges. Under the ESA, the protection, recovery, and enhancement of listed species are to receive priority consideration in the management of the refuges. FWS’ Division of Endangered Species provides overall guidance in the implementation of the ESA. FWS’ regions are generally responsible for implementing the act. 
Among other things, the act requires FWS to develop and implement recovery plans for all listed species, unless such a plan would not benefit the species. Recovery plans identify the problems threatening the species and the actions necessary to reverse the decline of a species and ensure its long-term survival. Recovery plans serve as blueprints for private, federal, and state interagency cooperation in taking recovery actions. Of all the listed species, 215, or 24 percent, occur on wildlife refuges. (See app. I for the listed species that occur on refuges.) Figure 2 shows the types of listed species found on refuges. As the figure shows, more than two-thirds of the species are plants, birds, and mammals. (Figure 2 legend: fishes (27), mammals (40), reptiles (19); “Other” includes amphibians (2), clams (6), crustaceans (1), insects (7), and snails (1). Percentages have been rounded; the total number of species is 215.) Some refuges represent a significant portion of a listed species’ habitat. According to FWS regional refuge officials, 66 refuges—encompassing a total of 26.7 million acres, including 22.6 million acres on two Alaska refuges—provide a significant portion of the habitat for 94 listed species. For example, Ash Meadows NWR in Nevada has 12 listed plants and animals that exist only at the refuge—the largest number of listed native species at one location in the United States. In addition, Antioch Dunes NWR in California contains virtually the entire remaining populations of three listed species—the Lange’s metalmark butterfly, the Antioch Dunes evening-primrose, and the Contra Costa wallflower. (App. II lists the refuges that provide a significant portion of a listed species’ habitat and the specific species that occur at these refuges.) Some listed species use the refuges on a temporary basis for migratory, breeding, and wintering habitat. 
As previously shown in figure 1, the refuges are often located along the primary north-south routes used by migratory birds. Migratory birds use the refuges as temporary rest-stops along their migration routes. The listed wood stork, for example, migrates in the spring from southern Florida to Harris Neck NWR in Georgia to nest in the refuge’s freshwater impoundments. In addition, several refuges provide breeding habitat for listed species. The Blackbeard Island and Wassaw refuges in Georgia and the Merritt Island NWR in Florida, for example, provide beach habitat for the listed loggerhead sea turtle to lay its eggs. Wildlife refuges and refuge staff contribute to the recovery of listed species in a variety of ways. Foremost, refuges provide secure habitat, which is often identified as a key component in the recovery of listed species. In addition, refuge staff carry out, as part of their refuge management activities, specific actions to facilitate the recovery of listed species. Refuge staff also participate in the development and review of recovery plans for listed species. One of the primary efforts for the recovery of listed species is to stabilize or reverse the deterioration of their habitat. Refuges contribute to the recovery of listed species by providing secure habitat. Our review of 120 recovery plans for listed species occurring on refuges disclosed that 80 percent of the plans identified securing habitat as an action needed to achieve species recovery. As of March 1994, the refuge system included about 91 million acres of wildlife habitat. FWS has acquired over 310,000 acres to create 55 new refuges specifically for the protection of listed species. FWS’ policy requires that a species recovery plan be prepared before lands are acquired for listed species. For example, the recovery plan for four Hawaiian waterbirds called for FWS to secure and manage a number of ponds and marshes that two or more of the waterbirds are known to use. 
One specific area described in the recovery plan, Kealia Pond, was subsequently acquired by FWS in 1992. Overall, however, we could not readily determine whether the acquisitions of lands for the 55 new refuges had been identified as needed acquisitions in species recovery plans. (App. III lists the refuges specifically established for listed species.) According to FWS’ data, listed species found on refuges, and specifically on refuges established to protect listed species, appear to have a more favorable recovery status than listed species that do not occur on refuges. Table 1 provides an overview of FWS’ data on the recovery status of listed species. This information was compiled on the basis of the knowledge and judgments of FWS staff and others familiar with the species. As the table shows, a greater proportion of the listed species that occur on refuges have a recovery status determined by FWS to be improving or stable than the listed species not found on refuges. According to FWS’ guidance, species whose recovery is improving are those species known to be increasing in number and/or for which threats to their continued existence are lessening in the wild. Species whose recovery is stable are those known to have stable numbers over the recent past and for which threats have remained relatively constant or diminished in the wild. Declining species are those species known to be decreasing in number and/or for which threats to their continued existence are increasing in the wild. Refuge staff carry out a variety of activities that contribute to the recovery of listed species. According to FWS’ Refuges 2003: Draft Environmental Impact Statement, a total of 356 refuges had habitat management programs under way that directly benefited listed species. Refuge staff at the 15 refuges we visited were carrying out a number of specific actions in support of the protection and recovery of listed species. 
Such actions generally involved efforts to monitor the status of listed species’ populations at the refuges and carry out projects designed to restore and manage the habitats and the breeding areas of listed species. Examples of specific actions being taken included the following:

- Carrying out prescribed burning of vegetation at the Okefenokee NWR (Georgia). Among other things, such burning helps restore and facilitate the growth of longleaf pine trees—the primary habitat for the listed red-cockaded woodpecker.
- Enclosing nesting areas at the Salinas River NWR (California). The enclosures protect the listed western snowy plover’s nests and chicks from predation by red foxes.
- Undertaking protective actions at the Hakalau Forest NWR (Hawaii). Specifically, to protect and assist in the recovery of five listed forest birds, the refuge manager has restricted public use, fenced off the forest to keep out wild pigs and cattle, and created new nesting habitat for the listed birds by protecting indigenous plants and eliminating nonnative/exotic plants.
- Developing artificial nesting structures for wood storks at the Harris Neck NWR (Georgia). According to the refuge biologist, each structure at the refuge was occupied by up to three nests for these birds in both 1993 and 1994.
- Providing economic incentives to protect habitat and provide a food source for the listed bald eagle at Blackwater NWR (Maryland). Specifically, refuge management pays muskrat trappers to kill a rodent (the nutria) that is destroying the refuge wetlands. The carcasses are then left for bald eagles to eat.
- Managing vegetation growth to provide feeding pastures for the listed Columbian white-tailed deer at the Julia Butler Hansen Refuge for Columbian White-tailed Deer (Oregon and Washington). The vegetation in the deer’s feeding pastures is kept short by allowing cattle to graze on portions of refuge lands under cooperative agreements with local farmers. 
Refuge staff also participate on teams tasked with developing recovery plans for listed species. While the responsibility for developing and implementing the plans rests with FWS’ regional offices, recovery teams often include species experts from federal and state agencies (including the refuges), conservation organizations, and universities. For example, a biologist at the San Francisco Bay NWR is helping develop a revised recovery plan for the salt marsh harvest mouse, the California clapper rail (a species of bird), and other coastal California wetlands species. On the basis of their knowledge of the listed species, refuge staff are also asked to comment on draft recovery plans developed by others. For example, refuge staff at the Moapa Valley NWR in Nevada were asked to review the draft recovery plan for the Moapa dace (a species of fish) developed by a recovery team made up of representatives from a variety of organizations, including the Department of the Interior’s Bureau of Reclamation; the University of Nevada, Las Vegas; and the Nevada Division of Wildlife. Refuge staff at the locations we visited told us they use the recovery plans to guide their activities to protect listed species. They also told us that recovery plans are good reference tools and help outline the management actions necessary for species recovery. They noted, however, that recovery plans have their limitations—plans can become outdated quickly, and refuges often lack the funding necessary to undertake all of the prescribed recovery tasks. While refuge staff have taken some actions to protect and aid the recovery of listed species on their refuges, we found that some recovery efforts were not undertaken. According to refuge managers and staff, their ability to contribute to species recovery efforts is constrained by the level of available funding. 
Two 1993 Interior reports discussed overall concerns about refuge funding and concluded that refuge funding was inadequate to meet the missions of refuges. In its Refuges 2003: Draft Environmental Impact Statement, FWS reported that the refuge system’s current annual funding is less than half the amount needed to fully meet established objectives. From October 1, 1988, through fiscal year 1993, appropriations for the Division of Refuges increased from $117.4 million to $157.5 million per year. If the current level of annual funding continues, according to FWS, funding will be inadequate to address the existing backlog of major refuge maintenance projects or the programs and construction projects necessary for any expanded wildlife or public use activities. In addition, FWS stated that recent increases in refuge funding have not been sufficient to address the rising costs of basic needs, such as utilities, fuel, travel, and training. In August 1993, Interior’s Inspector General reported that “refuges were not adequately maintained because Service funding requests for refuge maintenance have not been adequate to meet even the minimal needs of sustaining the refuges.” According to the Inspector General, the maintenance backlog totaled $323 million as of 1992. The Inspector General also reported that “new refuges have been acquired with increased Service responsibilities, but additional sufficient funding was not obtained to manage the new refuges.” Between 1988 and 1992, according to the Inspector General, $17.2 million was necessary to begin operations at the 43 new refuges acquired during this period. However, only $4.7 million was appropriated for all new and expanded refuges. This appropriation level for refuge funding resulted in a $12.5 million deficit, according to the Inspector General, some of which contributed directly to the maintenance backlog. 
In response to the Inspector General’s findings, FWS has agreed to develop a plan to reduce refuges’ maintenance backlogs and to report on efforts to ensure consideration of the operations and maintenance costs in all future acquisitions. The budget resources are insufficient to undertake all of the efforts necessary to recover listed species, according to refuge managers. In general, refuge operations and maintenance budgets are earmarked for items such as salaries, utilities, and specific maintenance projects. As a consequence, many efforts to recover listed species are not being carried out. At 14 of the 15 locations we visited, refuge managers and staff said funding constraints limited their ability to fully implement recovery actions for listed species and other protection efforts. For example, refuge staff at the Savannah Coastal Refuge Complex in Georgia explained that they have enough resources to conduct only one survey of the bald eagle population per year, rather than the three they feel are necessary to adequately monitor the eagle’s status. A biologist at the San Francisco Bay Refuge Complex reported that no money is available to conduct genetic studies on the listed salt marsh harvest mouse, even though such studies are called for in the species recovery plan. In commenting on a draft of this report, the Assistant Secretary for Fish and Wildlife and Parks, Department of the Interior, generally concurred with the findings (app. IV contains Interior’s comments). In particular, the Assistant Secretary stated that funding limitations constrain the National Wildlife Refuge System’s ability to fully protect and recover endangered species; however, in light of other budgetary priorities, refuges have been funded at the highest affordable level. The Assistant Secretary also provided a number of comments that were technical in nature. 
In response, we revised the report, where appropriate, to refer to all components of the National Wildlife Refuge System rather than just the refuges and made other editorial changes. We conducted our work between May 1993 and July 1994 in accordance with generally accepted government auditing standards. To obtain information on FWS’ policies and procedures for refuges and implementation of the ESA, we reviewed relevant FWS documents, including the May 1990 Policy and Guidelines for Planning and Coordinating Recovery of Endangered and Threatened Species; the Refuge Manual; Refuges 2003: Draft Environmental Impact Statement; the 1990 and draft 1992 Report to Congress: Endangered and Threatened Species Recovery Program; and 120 species recovery plans. We also interviewed officials at the Division of Refuges and Division of Endangered Species at FWS headquarters and at the FWS Portland regional office. In addition, we visited and met with officials from 15 refuges—including refuges created specifically for listed species and those that were created for other purposes—to determine how each refuge contributed to recovery efforts for listed species. The 15 refuges included, in California, Antioch Dunes, San Francisco Bay, and San Pablo Bay; in Georgia, Harris Neck and Okefenokee; in Hawaii, Hanalei, Huleia, James C. Campbell, Kilauea Point, and Pearl Harbor; in Maryland, Blackwater; in Maryland and Virginia, Chincoteague; in Nevada, Ash Meadows and Moapa Valley; and in Oregon and Washington, the Julia Butler Hansen Refuge for Columbian White-tailed Deer. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of the Interior; the Assistant Secretary, Fish and Wildlife and Parks; and the Director of the Fish and Wildlife Service. We will also make copies available to others on request. 
Please call me at (202) 512-7756 if you or your staff have any questions. Major contributors to this report are listed in appendix V.

As of April 1994, the number of listed animal and plant species occurring on wildlife refuges totaled 215. As of June 30, 1994, recovery plans had been approved for 157 of these species (as indicated by an asterisk). The listed fishes and crustaceans include:

Cambarus aculabrum (crayfish with no common name)
*Cavefish, Ozark
*Chub, bonytail
*Chub, humpback
Chub, Oregon
Chub, Yaqui
*Dace, Ash Meadows speckled
*Dace, Moapa
*Darter, watercress
*Gambusia, Pecos
Madtom, Pygmy
Minnow, Rio Grande Silvery
*Poolfish (killifish), Pahrump
*Pupfish, Ash Meadows amargosa
*Pupfish, Devils Hole
*Pupfish, Warm Springs
*Shiner, Pecos bluntnose
*Squawfish, Colorado
*Sucker, Lost River
Sucker, razorback
*Sucker, short-nose
*Topminnow, Gila (including Yaqui)

[Appendix table: individual refuges, by state, and the listed species occurring on each. Species include, among others, the Aleutian Canada goose, Aleutian shield-fern, Ozark cavefish, gray and Indiana bats, Yaqui topminnow, Yaqui chub, Yaqui catfish, beautiful shiner, Lange’s metalmark butterfly, Contra Costa wallflower, Antioch Dunes evening-primrose, Lost River and short-nosed suckers, light-footed clapper rail, California clapper rail, California least tern, loggerhead, green, leatherback, and hawksbill sea turtles, American crocodile, Key Largo cotton mouse, Key Largo woodrat, rice (silver rice) rat, Mississippi sandhill crane, black-footed ferret (to be reintroduced), Columbian white-tailed deer (Julia Butler Hansen Refuge, in Oregon and Washington), valley elderberry longhorn beetle, bald eagle, least Bell’s vireo, salt marsh harvest mouse, peregrine falcon, and the dusky seaside sparrow (extinct).]

Kim Gianopoulos
Pursuant to a congressional request, GAO provided information on the Fish and Wildlife Service's (FWS) National Wildlife Refuge System, focusing on the extent to which wildlife refuges contribute to the protection and recovery of endangered species. GAO found that: (1) of about 900 endangered species, 215 occur or have habitat on national wildlife refuges; (2) the endangered species found on wildlife refuges represent a diversity of wildlife; (3) although many listed endangered species inhabit wildlife refuges, many other endangered species use refuge lands temporarily for breeding or migratory rest stops; (4) FWS refuges contribute to the protection and recovery of endangered species by providing safe and secure habitats, implementing recovery projects that are tailored to each endangered species, and identifying specific actions that can contribute to species recovery; (5) FWS efforts to manage wildlife refuges have been inhibited because funding levels have not kept pace with the increasing costs of managing new or existing refuges; and (6) at 14 of the 15 locations reviewed, refuge managers and staff believed that funding constraints limited their ability to enhance habitat and facilitate the recovery of endangered species.
In the United States, patients injured while receiving health care can sue health care providers for medical malpractice under governing state tort law, usually the law of the state where the injury took place. Laws governing medical malpractice vary from state to state, but among the goals of tort law are compensation for the victim and deterrence of malpractice. Nearly all health care providers buy medical malpractice insurance to protect themselves from potential claims that could cause financial harm or even bankruptcy absent liability coverage. For example, the average reported claims payment made on behalf of physicians and other licensed health care practitioners in 2001 was about $300,000 for all settlements, and about $500,000 for trial verdicts. Under a malpractice insurance contract, the insurer agrees to investigate claims, to provide legal representation for the health care provider, and to accept financial responsibility for payment of any claims up to a specified monetary level during an established time period. The most common policies sold by insurers provide $1 million of coverage per incident and $3 million of total coverage per year. The insurer provides this coverage in return for a fee—the medical malpractice premium. Medical malpractice premium rates differ widely by medical specialty and geography. Premiums paid by traditionally high-risk specialties, such as obstetrics, are usually higher than premiums paid by other specialties, such as internal medicine. Premium rates also vary across and within states. Across states, for example, a large insurer in Minnesota charged statewide base premium rates in 2002 of $3,803 for the specialty of internal medicine, $10,142 for general surgery, and $17,431 for OB/GYN. 
In contrast, a large insurer in Florida charged base premium rates in Dade County of $56,153 for internal medicine, $174,268 for general surgery, and $201,376 for OB/GYN, and $34,556, $107,242, and $123,924, respectively, for these same specialties in Palm Beach County. In addition to the wide range in premium rates charged, the extent to which premiums increase over time also varies by specialty and geographic area. Beginning in the late 1990s, malpractice premiums began to increase at a rapid rate for most, but not all, physicians in some states. For example, between 1999 and 2002, the Minnesota insurer increased its base premium rates by about 2 percent for each of the three specialties, in contrast to the Florida insurer that increased its base premium rates by about 98, 75, and 43 percent, respectively, for the three specialties in Dade County. Since 1999, medical malpractice premium rates for certain physicians in some states have increased dramatically. In a related report issued in June 2003, we examined the extent and causes of these recent increases. More specifically, we reported on (1) the extent of increases in medical malpractice insurance rates in seven states, (2) factors that have contributed to the increases, and (3) changes in the medical malpractice insurance market that may make the current period of rising premium rates different from earlier periods of rate hikes. Key findings from that report include the following. Among the seven states we analyzed, the extent of medical malpractice premium increases varied greatly not only from state to state but across medical specialties. For example, among the largest writers of medical malpractice insurance in the seven states, increases in base premium rates for general surgeons from 1999 to 2002 ranged from 2 percent in Minnesota to 130 percent in and around Harrisburg, Pennsylvania. 
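As a rough cross-check of the rate changes described above, the 1999 base rates implied by the quoted 2002 Dade County rates and the approximate percent increases can be back-calculated. A minimal Python sketch follows; the 2002 figures and percent increases are from this report, while the derived 1999 values are estimates only, since the reported increases are rounded ("about 98, 75, and 43 percent").

```python
# Back-calculate implied 1999 base premium rates from the reported 2002
# Dade County rates and the approximate percent increases cited above.
dade_2002 = {
    "internal medicine": 56_153,
    "general surgery": 174_268,
    "OB/GYN": 201_376,
}
approx_increase = {  # reported increases, 1999-2002, as fractions
    "internal medicine": 0.98,
    "general surgery": 0.75,
    "OB/GYN": 0.43,
}

def implied_1999_rate(rate_2002, pct_increase):
    """Invert rate_2002 = rate_1999 * (1 + pct_increase)."""
    return rate_2002 / (1 + pct_increase)

for specialty, rate in dade_2002.items():
    est = implied_1999_rate(rate, approx_increase[specialty])
    print(f"{specialty}: 2002 = ${rate:,}, implied 1999 base rate = ${est:,.0f}")
```

Because the percent increases are reported as approximations, the derived 1999 figures should be read as order-of-magnitude context rather than as reported data.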
Across specialties, one carrier raised premiums for the area in and around El Paso, Texas, during this period by 95 percent for general surgery, 108 percent for internal medicine, and 60 percent for OB/GYN. Multiple factors have contributed to the recent increases in medical malpractice premium rates. First, since 1998, the greatest contributor to increased premium rates in the seven states we analyzed appeared to be increased losses for insurers on paid medical malpractice claims. However, a lack of comprehensive data at the national and state levels on insurers’ medical malpractice claims and the associated losses prevented us from fully analyzing the composition and causes of those losses. Second, from 1998 through 2001, medical malpractice insurers experienced decreases in their investment income as interest rates fell on the bonds that generally make up around 80 percent of these insurers’ investment portfolios. While almost no medical malpractice insurers experienced net losses on their investment portfolios over this period, a decrease in investment income meant that income from insurance premiums had to cover a larger share of insurers’ costs. Third, during the 1990s, insurers competed vigorously for medical malpractice business, and several factors, including high investment returns, permitted them to offer prices that, in hindsight for some insurers, did not completely cover their ultimate losses on that business. As a result of this, some companies became insolvent or voluntarily left the market, reducing the downward competitive pressure on premium rates that had existed through the 1990s. Fourth, beginning in 2001, reinsurance rates for medical malpractice insurers also increased more rapidly than they had in the past, raising insurers’ overall costs. While the medical malpractice insurance market as a whole had experienced periods of rapidly increasing premium rates in the mid-1970s and mid-1980s, the market has changed considerably since then. 
These changes are largely the result of actions insurers, health care providers, and states have taken to address increasing premium rates. Beginning in the 1970s and 1980s, insurers began selling “claims-made” rather than “occurrence-based” policies, enabling insurers to better predict losses for a particular year. Also in the 1970s, physicians, facing increasing premium rates and the departure of some insurers, began to form mutual nonprofit insurance companies. Such companies, which may have some cost and other advantages over commercial insurers, now make up a significant portion of the medical malpractice insurance market. More recently, an increasing number of large hospitals and groups of hospitals or physicians have left the traditional commercial insurance market and sought alternative arrangements, for example, by self-insuring. While such arrangements can save money on administrative costs, hospitals and physicians insured through these arrangements assume greater financial responsibility for malpractice claims than they would under traditional insurance arrangements and thus may face a greater risk of insolvency. Finally, since the periods of increasing premium rates during the mid-1970s and mid-1980s, all states have passed at least some laws designed to reduce medical malpractice premium rates. Some of these laws are designed to decrease insurers’ losses on medical malpractice claims, while others are designed to more tightly control the premium rates insurers can charge. These market changes, in combination, make it difficult to predict how medical malpractice premiums might behave in the future. In order to improve the affordability and availability of malpractice insurance and to reduce liability pressure on providers, states have adopted varying types of tort reform legislation. Tort reforms are generally intended to limit the number of malpractice claims or the size of payments in an effort to reduce malpractice costs and insurance premiums. 
Also, some believe tort reforms can lower overall health care costs by reducing certain defensive medicine practices. Such practices include the overutilization by physicians of certain diagnostic tests or procedures primarily to reduce their exposure to malpractice liability, thereby adding to the costs of health care. State tort reform measures adopted during the past three decades include

placing caps on the amount that may be awarded to plaintiffs for damages in a malpractice lawsuit, including noneconomic, economic, and punitive damages;

abolishing the “collateral source rule” that prevents a defendant from introducing evidence that the plaintiff’s losses and expenses have been paid in part by other parties such as health insurers, or damage awards from being reduced by the amount of any compensation plaintiffs receive from third parties;

abolishing “joint and several liability” to ensure that damages are recovered from defendants in proportion to each defendant’s degree of responsibility, not each defendant’s ability to pay;

allowing damages to be paid in periodic installments rather than in a lump sum;

placing limits on fees charged by plaintiffs’ lawyers;

imposing stricter statutes of limitations that shorten the time injured parties have to file a claim in court;

establishing pretrial screening panels to evaluate the merits of claims before proceeding to trial; and

providing for greater use of alternative dispute resolution systems, such as arbitration panels.

Among the tort reform measures enacted by states, caps on noneconomic damage awards that include pain and suffering have been the focus of particular interest. Cap proponents believe that such limits can result in several benefits that help reduce malpractice insurance premiums, such as helping to prevent excessive awards and overcompensation and ensuring more consistency among jury verdicts. 
In contrast, cap opponents believe that factors other than award amounts affect premiums charged by malpractice insurers and that caps can result in undercompensation for severely injured persons. Congress is currently considering federal tort reform legislation that includes several elements adopted by states to varying degrees, including placing caps on noneconomic and punitive damages, allowing evidence at the trial of a plaintiff’s recovery from collateral sources, abolishing joint and several liability, and placing a limit on contingency fees, among others. Actions taken by health care providers in response to rising malpractice premiums have contributed to reduced access to specific services on a localized basis in the five states reviewed with reported problems. We confirmed instances where physician actions in response to malpractice pressures have resulted in decreased access to services affecting emergency surgery and newborn deliveries in scattered, often rural areas of the five states. However, we also determined that many of the reported physician actions and hospital-based service reductions were not substantiated or did not widely affect access to health care. For example, our analysis of Medicare utilization data suggests that reported reductions in certain high-risk services, such as some orthopedic surgeries and mammograms, have not widely affected consumer access to these services. To help avoid consumer access problems, some hospitals we contacted have taken certain steps, such as assuming the costs of physicians’ liability insurance, to enable physicians to continue practicing. We confirmed examples in each of the five states where access to services affecting emergency surgery and newborn deliveries has been reduced. In these instances, some of which were temporary, patients typically had to travel farther to receive care. 
The problems we confirmed were limited to scattered, often rural, locations and in most cases providers identified long-standing factors in addition to malpractice pressures that affected the availability of services. Florida: Among several potential access problems we reviewed in Florida, the most significant appeared to be the reduction in ER on-call surgical coverage in Jacksonville. We confirmed that at least 19 general surgeons who serve the city’s hospitals took leaves of absence beginning in May 2003 when state legislation capping noneconomic damages for malpractice cases at $250,000 was not passed. According to one hospital representative, the loss of these surgeons reduced the general surgical capacity of Jacksonville’s acute care community hospitals by one-third. The administrator of the practice that employs these surgeons told us that at least 8 are seeking employment in other states to avoid the high malpractice premiums in Florida. Hospital officials in Jacksonville told us that other providers, including some orthopedic surgeons and cardiovascular surgeons, had also taken leave as of May 2003 due in part to the risks associated with practicing without surgeons available in the ER for support in the event of complications. According to one Jacksonville area hospital official, her hospital has lost the services of 75 physicians in total due to leaves of absence taken by the physicians. Hospital and local health department officials said that the losses of surgeons have caused a reduction in ER on-call surgical coverage at most acute care hospitals in the city; the health department official said patients requiring urgent surgical care presenting at an ER that does not have adequate capacity must be transferred to the nearest hospital that does, which could be up to 30 miles away. Within the first 11 days after most of the physicians took leave, 120 transfers took place. 
Although the hospital officials we interviewed expected that some of the physicians would eventually return to work, they believe timing may depend on passage of malpractice reform legislation during a special legislative session expected to take place this summer. Mississippi: Reductions in ER on-call surgical coverage and newborn delivery services have created access problems in certain areas of Mississippi. We confirmed that some surgeons along the Gulf Coast who formerly provided on-call services at multiple hospitals are restricting their coverage to a single ER and others are eliminating coverage entirely in an effort to minimize their malpractice premiums and exposure to litigation. Officials of two of five hospitals we spoke with in the three Gulf Coast counties told us they have either completely lost or experienced reduced ER on-call surgical coverage for certain services. These reductions in coverage may require that patients be transferred greater distances for services. Some family practitioners and OB/GYNs have stopped providing newborn delivery services, creating access problems in certain rural communities. An official from one hospital in a largely rural county in central Mississippi told us that it closed its obstetrics unit after five family practitioners who attended deliveries stopped providing newborn delivery services in order to avoid a more than 65 percent increase in their annual premium rates. Pregnant women in the area now must travel about 65 miles to the nearest obstetrics ward to deliver. Loss of obstetrics providers in other largely rural areas may require pregnant women in these areas to travel farther for deliveries. A provider association official told us that malpractice pressures have worsened long-standing difficulties associated with recruiting physicians to the state, and providers said that low Medicaid reimbursement rates and insufficient reimbursement for trauma services also influence physician practice decisions. 
Nevada: Reductions in ER on-call surgical coverage have created access problems in Clark County. To draw attention to their concerns about rising medical malpractice premiums, over 60 orthopedic surgeons in the county withdrew their contracts with the University of Nevada Medical Center, causing the state’s only Level I trauma center to close for 11 days in July 2002. The center reopened after a special arrangement was made for surgeons to temporarily obtain malpractice coverage through the Medical Center and the governor announced his support for state tort reform, prompting the return of approximately 15 of the surgeons, according to medical center staff. Another hospital in the county has closed its orthopedics ward and no longer provides orthopedic surgical coverage in its ER as orthopedic surgeons have sought to reduce their malpractice exposure by decreasing the number of hospitals in which they provide ER coverage, according to a hospital official. Clark County has had long-standing problems with ER staffing due in part to its rapidly growing population, according to providers. Pennsylvania: Some areas in Pennsylvania have experienced reductions in access to emergency surgical services and newborn delivery services. For example, one rural hospital recently lost three of its five orthopedic surgeons. As a result, orthopedic on-call coverage in its ER has declined from full-time to only one-third of each month. At the same hospital, providers reported that four of the nine OB/GYNs who provide obstetrical care in two counties stopped providing newborn delivery services because their malpractice premiums became unaffordable and another left the state to avoid high premiums. Some pregnant women now travel an additional 35 to 50 miles to deliver. According to a hospital official, the remaining four OB/GYNs were each in their sixties and near retirement. 
This hospital reported that the loss of the physicians was largely due to the rising cost of malpractice insurance, but also identified the hospital’s rural location, the area’s large Medicaid population, and low Medicaid reimbursement rates as factors contributing to the physicians’ decisions to leave. Trauma services in Pennsylvania have also been affected in some localities. For example, a suburban Philadelphia trauma center closed for 13 days beginning in December 2002 because its orthopedic surgeons and neurosurgeons reported they could not afford to renew their malpractice insurance. The situation was resolved when a new insurance company offered more affordable coverage to the surgeons and the governor introduced a plan to reduce physician payments to the state medical liability fund, according to a hospital official. West Virginia: Access problems due to malpractice concerns in West Virginia involved ER specialty surgical services. One of the state’s major medical centers lost its Level I trauma designation for approximately 1 month in the early fall of 2002 due to reductions in the number of orthopedic surgeons providing on-call coverage. During this time, patients who previously would have been treated at this facility had to be transferred to other facilities at least 50 miles away. The hospital’s Level I designation was restored when additional physicians agreed to provide on-call coverage after the state extended state-sponsored liability insurance coverage to physicians who provide a significant percentage of their services in a trauma setting. The state’s northern panhandle lost all neurosurgical services for about 2 years when three neurosurgeons who served the area either left or stopped providing these services in response to malpractice pressures, requiring that all patients needing neurosurgical care be transferred 60 miles or more, limiting patients’ access to urgent neurosurgical care. 
Full-time neurosurgical coverage was restored to the area in early 2003 through an agreement with a group of neurosurgeons at one of the state’s major academic medical centers. A hospital official from this area reported that efforts to recruit a permanent full-time neurosurgeon have been unsuccessful. Provider groups told us that malpractice concerns have made efforts to recruit and retain physicians more difficult; however, they also identified the rural location, low Medicaid reimbursement rates, and the state’s provider tax on physicians as factors that have made it difficult to attract and retain physicians. Despite some confirmed reductions in ER on-call surgical coverage and newborn delivery services that were related to physicians’ concerns about malpractice pressures and affected access to health care, we also identified reports of provider actions taken in response to malpractice pressures—such as reported physician departures and hospital unit closures—that were not substantiated or that did not widely affect access to health care. Our contacts with 49 hospitals revealed that although 26 confirmed a reduction in surgeons available to provide on-call coverage for the ER, 11 of these reported that the decreases had not prevented them from maintaining the full range of ER services and 3 reported that the surgeons had returned or replacements had been found. Hospital association representatives reported that access to newborn delivery services in Florida had been reduced due to the closures of five hospital obstetrics units. However, we contacted each of these hospitals and determined that these units were located in five separate urban counties, and each hospital reported that demand for its now closed obstetrics facility had been low and that nearby facilities provided obstetrics services. 
In West Virginia, although access problems reportedly developed because two hospital obstetrics units closed due to malpractice pressures, officials at both of these hospitals told us that a variety of factors, including low service volume and physician departures unrelated to malpractice, contributed to the decisions to close these units. One of the hospitals has recently reopened its obstetrics unit. Provider groups also asserted that some physicians in each of the five states are moving, retiring, or closing practices in response to malpractice pressures. In the absence of national data reporting physician movement among states related to malpractice concerns, we relied on state-level assertions of departures that were based on a variety of sources, including survey results, information compiled and quantified by provider groups, and unquantified anecdotal reports. (See table 1.) Although some reports have received extensive media coverage, in each of the five states we found that actual numbers of physician departures were sometimes inaccurate or involved relatively few physicians. Reports of physician departures in Florida were anecdotal, not extensive, and in some cases we determined them to be inaccurate. For example, state medical society officials told us that Collier and Lee counties lost all of their neurosurgeons due to malpractice concerns; however, we found at least five neurosurgeons currently practicing in each county as of April 2003. Provider groups also reported that malpractice pressures have recently made it difficult for Florida to recruit or retain physicians of any type; however, over the past 2 years the number of new medical licenses issued has increased and the number of physicians per capita has remained unchanged. In Mississippi, the reported physician departures attributed to recent malpractice pressures were scattered throughout the state and represented 1 percent of all physicians licensed in the state. 
Moreover, the number of physicians per capita has remained essentially unchanged since 1997. In Nevada, 34 OB/GYNs reported leaving, closing practices, or retiring due to malpractice concerns; however, confirmatory surveys conducted by the Nevada State Board of Medical Examiners found nearly one-third of these reports were inaccurate—8 were still practicing and 3 stopped practicing due to reasons other than malpractice. Random calls we made to 30 OB/GYN practices in Clark County found that 28 were accepting new patients with wait-times for an appointment of 3 weeks or less. Similarly, of the 11 surgeons reported to have moved or discontinued practicing, the board found 4 were still practicing. In Pennsylvania, despite reports of physician departures, the number of physicians per capita in the state has increased slightly during the past 6 years. The Pennsylvania Medical Society reported that between 2002 and 2003, 24 OB/GYNs left the state due to malpractice concerns; however, the state’s population of women age 18 to 40 fell by 18,000 during the same time period. Departures of orthopedic surgeons comprise the largest single reported loss of specialists in Pennsylvania. Despite these reported departures, the rate of orthopedic surgeries among Medicare enrollees in Pennsylvania has increased steadily for the last 5 years, as it has nationally. (See fig. 1.) In West Virginia, provider groups did not provide us with specific numbers of physician departures, but did offer anecdotal reports of physicians who have moved out of state or left practice. Despite these reports, the number of physicians per capita increased slightly between 1997 and 2002. Some providers in each of the five states also reported that physicians have recently cut back on certain services they believe to be high risk to reduce their malpractice insurance premiums or exposure to litigation. 
Evidence was based on surveys conducted by state and national medical and specialty provider groups and anecdotal reports by state provider groups, generally between 2001 and 2002. The most frequently cited service reductions included spinal surgeries and joint revisions and repairs (all five states), mammograms (Florida and Pennsylvania), and physician services in a nursing home setting (Florida and Mississippi). Survey data used to identify service cutbacks in response to physician concerns about malpractice pressures are not likely representative of the actions taken by all physicians. Most surveys had low response rates—typically 20 percent or less. Moreover, surveys often did not identify any one specific service as widely affected or identified service reductions in a nonspecific manner. For example, in responding to one recent survey, neurologists reported reducing 12 different types of services; however, the most widely reported reduction for any one service type was reported by fewer than 4 percent of respondents. AMA recently reported that about 24 percent of physicians in high-risk specialties responding to a national survey have stopped providing certain services; however, the response rate for this survey was low (10 percent overall), and AMA did not identify the number of responses associated with any particular service. Our analysis of utilization rates among Medicare beneficiaries for three of the specific services frequently cited as being reduced—spinal surgery, joint revisions and repairs, and mammography—did not identify recent reductions. For example, utilization of spinal surgeries among Medicare beneficiaries in the five states generally increased from July 2000 through June 2002, and is currently higher than the national average. (See fig. 2.) Utilization of joint revision and repair services among Medicare beneficiaries in the five states is slightly below, but has generally tracked, the national average and has not recently declined. 
(See fig. 3.) Contrary to reports of reductions in mammograms in Florida and Pennsylvania, our analysis showed that utilization of these services among Medicare beneficiaries is higher than the national average in both Florida, where utilization rates have recently increased, and in Pennsylvania, where the pattern of utilization has not recently changed. (See fig. 4.) We also contacted selected hospitals and mammography facilities reported to have had problems in these two states and found that the longer wait times cited by provider organizations were more likely due to causes other than malpractice pressures. Although data limitations preclude an analysis of physician services in a nursing home setting, interviews with industry representatives did not reveal widespread reductions of services provided in these facilities. Nursing home representatives in all five states reported that facilities are facing increasing malpractice pressures due to higher premiums or decreased availability of coverage and in two states reported that these pressures are causing some physicians to stop providing services in these facilities. However, they also told us that residents still receive needed physician services. Some health care providers have taken certain actions to avoid access problems in the face of malpractice-related pressures. Several hospital officials we contacted reported they are assuming physicians’ liability insurance costs to avoid any access problems related to malpractice pressures. Officials in 9 of 49 hospitals contacted in the five states reported that, in order to retain needed staff, they have either hired physicians as direct employees, thereby covering their malpractice insurance premiums in full, or provided them with partial premium subsidies. An unpublished survey completed by The Hospital & Healthsystem Association of Pennsylvania found that 5 of 89 hospitals or health systems responding had taken these measures to maintain adequate staffing. 
An official at a small hospital in a largely rural Mississippi county told us that the hospital recently hired six family practitioners who provide all of its obstetrics services in order to assume their liability insurance costs and prevent loss of these services after the physicians’ premiums increased significantly. An official at a West Virginia hospital reported that increasing numbers of newly recruited physicians are coming to the area as direct employees of hospitals. In addition, where allowed by state law, some providers are going without malpractice insurance coverage. For example, a provider group in Mississippi reported that increasing numbers of nursing homes are going without coverage for some period of time because insurers are not renewing their policies or are raising premiums to rates that are unaffordable. According to an official from one insurer of Mississippi nursing homes, more than 40 homes statewide were without coverage at some point during 2002 as compared to fewer than 5 homes in 2001. Similarly, while Florida law does not require that physicians carry malpractice insurance, hospitals may impose such a requirement on affiliated physicians. One hospital contacted in the state told us it has loosened this requirement in response to physicians’ concerns over increasing malpractice premiums. Several recently published surveys report that physicians practice defensive medicine in response to malpractice pressures. In addition, most published studies designed to measure the prevalence of and costs associated with such practices generally conclude that physicians practice defensive medicine in specified circumstances and that doing so raises health care costs. 
However, because the surveys generally had low response rates and were not precise in measuring the prevalence of these practices, and because the studies examined physician practice behavior in only narrowly specified clinical situations, the results cannot be used to reliably estimate the overall prevalence or costs of defensive medicine practices. Physicians responding to surveys reported that they practice defensive medicine to varying extents, but low response rates and imprecise measurements of defensive medicine practices preclude generalizing these responses to all physicians. For example, a 2003 AMA survey found that, of the 30 percent of responding physicians who reported recently referring more complex cases to specialists, almost all indicated that professional liability pressures were important in their decision; and an April 2002 survey conducted by the American Academy of Orthopaedic Surgeons (AAOS) found that, of the 48 percent of responding orthopedists who reported that the costs of malpractice insurance caused them to alter their practice, nearly two-thirds reported ordering more diagnostic tests. However, the response rates for the AMA and AAOS surveys were about 10 and 15 percent, respectively, raising questions about how representative these responses were of all physicians nationwide. Another 2002 survey of 300 physicians conducted by a polling firm found that, due to concerns about medical malpractice liability, 79 percent of respondents reported ordering more tests, 74 percent reported referring patients to specialists more often, and 41 percent reported prescribing more medications than they otherwise would based only on medical necessity. However, these survey results do not indicate whether the respondents practice the cited defensive behaviors on a daily basis or only rarely, or whether they practice them with every patient or only with certain types of patients. 
Officials from AMA and several medical, hospital, and nursing home associations in the nine states we reviewed told us that defensive medicine exists to some degree, but that it is difficult to measure; and officials cited surveys and published research but could not provide additional data demonstrating the extent and costs associated with defensive medicine. Some officials pointed out that factors besides defensive medicine concerns also explain differing utilization rates of diagnostic and other procedures. For example, a Montana hospital association official said that revenue-enhancing motives can encourage the utilization of certain types of diagnostic tests, while officials from Minnesota and California medical associations identified managed care as a factor that can mitigate defensive practices. According to some research, managed care provides a financial incentive not to offer treatments that are unlikely to have medical benefit. Most research that has attempted to measure defensive practices has examined physician practices under specific clinical situations. For example, based on clinical scenario surveys, records review, and a synthesis of prior research, a 1994 study concluded that the percentage of diagnostic procedures related to defensive medicine practices is higher in specific clinical situations, such as the management of head injuries in ERs and cesarean deliveries in childbirth, but lower when measured across multiple procedures. The same study also surveyed physicians about nine hypothetical clinical scenarios likely to encourage defensive medicine practices and found the share of physicians reporting taking at least one clinical action primarily out of concern about malpractice varied widely depending on the situation—from 5 percent for back pain to 29 percent for head trauma. 
A more recent 1999 study that used records review found that reduced malpractice premiums for OB/GYNs were related to a statistically significant but small decrease in the rate of cesarean sections performed for some groups of mothers, a procedure researchers believe to be influenced by physicians’ concerns about malpractice liability. Some studies have also concluded that certain tort reforms may reduce defensive medicine as evidenced by slower growth in health care expenditures; however, these studies have not fully considered the range of factors that can influence medical spending. For example, a 1996 study using records review found that for a population of elderly Medicare patients treated for acute myocardial infarction or ischemic heart diseases, certain tort reforms led to reductions of 5 to 9 percent in hospital expenditures. However, this study did not control for other factors that can affect hospital costs, such as the extent of managed care penetration in different areas. When controlling for managed care penetration in a 2000 follow-up study, the same researchers found that the reductions in hospital expenditures attributable to direct tort reforms dropped to about 4 percent. Moreover, preliminary findings from a 2003 study that replicated and expanded the scope of these studies to include Medicare patients treated for a broader set of conditions failed to find any impact of state tort laws on medical spending. Appendix III summarizes the methods, findings, and limitations of published studies examining defensive medicine. Although available research suggests that defensive medicine may be practiced in specific clinical situations, the findings are limited and cannot be generalized to estimate the prevalence and costs of defensive medicine nationwide. 
Because the studies focused on specific clinical circumstances and populations, even slight changes in these scenarios could yield significant changes in the degree of defensive medicine practices identified. Consequently, reports that use the results of these studies to estimate defensive medicine practices and costs nationally are not reliable. For example, recent reports by the U.S. Department of Health and Human Services (HHS) applied the 5 to 9 percent hospital cost savings estimate for Medicare heart patients to total national health care spending to estimate the total defensive medicine savings that could result if federal tort reforms were enacted. Because the 5 to 9 percent savings applies only to hospital costs for elderly patients treated for two types of heart disease, the savings cannot be generalized across all services, populations, and health conditions.

Premium rates reported for the physician specialties of general surgery, internal medicine, and OB/GYN—the only specialties for which data were available—were relatively stable on average in most states from the mid- to late 1990s and then began to rise, but more slowly among states with certain noneconomic damage caps. Malpractice claims payments against all physicians between 1996 and 2002 also tended to be lower and grew less rapidly on average in states with these damage caps than in states with limited reforms; however, these averages obscured wide variation between states in any given year and for individual states from year to year. Like the premium rate data, these claims payment data do not depict the experience of all providers; they exclude institutional providers such as hospitals and nursing homes, for which comprehensive data were not available. 
Moreover, differences in both premiums and claims payments are also affected by multiple factors in addition to damage caps, and we could not determine the extent to which differences among states were attributable to the damage caps or to additional factors. The average medical malpractice premium rates across the three specialties reported by MLM (general surgery, internal medicine, and OB/GYN) remained relatively stable during the mid- to late-1990s. From 1996 to 2000, average premium rates for all states changed little, as did average premium rates for states with certain caps on noneconomic damages and states with limited reforms, increasing or decreasing annually by no more than about 5 percentage points on average. After 2000, premium rates began to rise across most states on average, but more slowly among the states with certain noneconomic damage caps. In particular, from 2001 to 2002, the average rates of increase in the states with noneconomic damage caps of $250,000 and $500,000 or less were 10 and 9 percent, respectively, compared to 29 percent in the states with limited reforms. (See fig. 5.) The recent increases in premium rates were also lower for each reported physician specialty in the states with these noneconomic damage caps. From 2001 to 2002, the average rates of premium growth for each specialty in the states with these noneconomic damage caps were consistently lower than the growth rates in the limited reform states. (See fig. 6.) In addition to including rates for only three specialties, premium rates reported by MLM are subject to other limitations. First, because MLM relies on a voluntary survey, its data do not include all insurers that provide coverage in each state. Certain companies that may have a large market share in a particular state may not be included. MLM estimates that its 2002 survey may exclude about one-third of the total malpractice insurance market nationwide. 
Second, insurers that do report rates have not consistently done so across all the years, or have not consistently reported premiums in different geographic areas within each state. We generally excluded data from insurers that did not consistently report premium rates across most of the years studied. Third, premium rates do not reflect discounts, premium offsets, or rebates that may effectively reduce the actual premium rate, or surcharges that are assessed in certain states for physician participation in mandatory state-funded insurance programs. These surcharges can range from a small amount to more than the base premium rate. Other studies have found a relationship between direct tort reforms that include noneconomic damage caps and lower rates of growth in premiums. For example, in a recent analysis of malpractice premiums in states with and without certain medical malpractice tort limitations, the Congressional Budget Office (CBO) estimated that certain caps on damage awards in combination with other elements of proposed federal tort reform legislation would effectively reduce malpractice premiums on average by 25 to 30 percent over the 10-year period from 2004 through 2013. A 1997 study that assessed physician-reported malpractice premiums from 1984 through 1993 found that direct reforms, including caps on damage awards, lowered the growth in malpractice premiums within 3 years of their enactment by approximately 8 percent. Average per capita payments for claims against all physicians tended to be lower on average in states with noneconomic damage caps than in states with limited reforms. From 1996 through 2002, the average per capita payments were $10 for states with these damage caps compared with $17 for states with limited reforms. Within these averages, however, were wide variations among states. 
For example, in 2002 the per capita claims payments among states with these caps ranged from $4 to $16, compared with $3 to $33 among states with limited reforms. In addition, two states among those with limited reforms had consistently higher average claims payments, raising the overall average among this group of states. Absent the claims experience of these two states, the average claims payment for states with limited reforms from 1996 through 2002 would decrease to $11, only slightly higher than the $10 in states with these damage caps. Average growth in per capita claims payments for all physicians was also lower among the states with caps on noneconomic damages than among the states with limited reforms. From 1996 through 2002 average per capita claims payments grew by 5 and 6 percent in the states with noneconomic damage caps of $250,000 and $500,000 or less, respectively, compared to 10 percent in the states with limited reforms. However, the growth in these payments also varied widely among states in any given year and within individual states from year to year. For example, from 2001 to 2002, the average growth in claims payments on an individual state basis ranged from a 68 percent decrease in the District of Columbia to a 70 percent increase in Wyoming. Within the same state, growth rates fluctuated widely from year to year. For example, Mississippi experienced an 18 percent decrease in claims payments from 1999 to 2000, followed by a 61 percent increase in 2001, and a 5 percent decrease in 2002. The claims payment data reported to NPDB that we analyzed contain certain limitations. The data include malpractice claims against licensed physicians, and not against other institutional providers such as hospitals and nursing homes, thus limiting the overall completeness of the data across all providers. In addition, as we have previously reported, certain claims payments may be underreported to NPDB. 
When physicians are not specifically named in a malpractice settlement, the related claims payments may not be reported. Nevertheless, because insurers must report payment of claims against physicians subject to federal law and not varying state laws, NPDB data are useful in comparing trends across states. Other sources of claims payment data are subject to limitations of completeness or comparability. See appendix II for more information on the limitations of NPDB and other claims data sources. For states that have adopted certain tort reforms, especially caps on noneconomic damages, other studies have also found associations with lower claims payments. In its recent analysis of malpractice premiums and claims payments in states with various medical malpractice tort limitations, CBO found that caps on damage awards result in lower malpractice costs. Another study based on claims data in 19 states showed that direct reforms were associated with a smaller percentage of claims resolved with some compensation to plaintiffs and reduced claim frequency. In contrast, other researchers who have examined the effect of indirect tort reforms on malpractice costs have found mixed results. One study found that indirect reforms did not reduce malpractice cost indicators, while another found that a greater number of reforms (both direct and indirect) were associated with lower malpractice costs. These studies have also relied on claims data that have limitations in terms of their completeness and comparability. Differences in malpractice premiums and claims payments across states are influenced by several factors other than noneconomic damage caps. First, the manner in which damage caps are administered can influence the ability of the cap to restrain claims and thus premium costs. 
Some states permit injured parties to collect damages only up to the specified level of the cap regardless of the number of defendants, while other states permit injured parties to collect the full cap amount from each defendant named in a suit. Malpractice insurers told us that imposing a separate cap on amounts recovered from each of several defendants increases total claims payouts, which can hinder the effectiveness of the cap in constraining premium growth. Second, tort reforms unrelated to caps can also affect premium and claims costs. For example, California tort reform measures not only include a $250,000 cap but also allow other collateral sources to be considered when determining how much an insurer must pay in damages and allow periodic payment of damages rather than requiring payment in a lump sum, among other measures. Malpractice insurers told us that these provisions in addition to the cap have helped to constrain premium growth in that state. In Minnesota, which has no caps on damages but has relatively low growth in premium rates and claims payments, trial attorneys maintain that prescreening requirements reduce claim costs and premiums by preventing some meritless claims from going to trial. Third, state laws and regulations unrelated to tort reform, such as premium rate regulations, vary widely and can influence premium rates. Some states such as Minnesota and Mississippi tend not to regulate rates, while others, such as California, require state approval of the premium rates charged by insurers. Finally, insurers’ premium pricing decisions are affected by their losses on medical malpractice claims and income from investments, and other market conditions such as the level of market competition among insurers and their respective market shares. We could not determine the extent to which differences in premium rates and claims payments across states were attributed only to damage caps or also to these additional factors. 
We received comments on a draft of this report from three independent health policy researchers and from AMA. Each of the researchers has expertise in malpractice-related issues and has conducted and published research on the effects of malpractice pressures on the health care system, and two of the three are physicians. The independent researchers generally concurred with our findings and provided technical comments, which we incorporated as appropriate. In its written comments, AMA questioned our finding that rising malpractice premiums have not contributed to widespread health care access problems, expressing concern that the scope of our work limited our ability to fully identify the extent to which malpractice-related pressures are affecting consumers’ access to health care. We disagree with AMA, as explained below. However, in response to AMA and the other reviewers’ comments, we clarified the report’s discussion of the scope of work and methods used to assess health care access issues.

AMA’s comments fell into four general areas: completeness of evidence examined, measures used to assess access problems, time lags in available data, and the cost and impact of defensive medicine. AMA questioned our finding that access problems were not widespread based on our work in 5 states, whereas it has identified 18 states “in a full-blown liability crisis.” It further cited results from its own recent physician survey on professional liability as evidence that medical liability concerns are causing physicians to limit their practices. The report clearly states the scope of our work and does not attempt to generalize our findings beyond the 5 states with reported problems that we reviewed. However, these 5 states were among the most visible and often-cited examples of “crisis” states by AMA and other provider groups. 
We believe that our finding that malpractice-related concerns contributed to localized but not widespread access problems in these states provides relevant and important insight into the overall problem. With respect to AMA’s reference to evidence available from its own survey, our report notes that the low response rate of 10 percent to its survey precludes the ability to reliably generalize the survey results to all physicians. AMA suggested that we withhold release of the report until we contacted state and national medical and specialty associations to obtain more complete and accurate information about access to care problems and it provided contacts for associations in each of the five states with reported problems and for four national specialty associations. We made these contacts throughout the course of our work, and the information these associations provided formed the basis for many of our findings. As the draft report noted, we contacted state medical, hospital, and nursing home association representatives in each of the five states with reported problems. We also contacted nine national medical and specialty associations, including three of the four AMA cited, which were specified in the draft report. In response to AMA’s comments, we added an appendix to specify the names of each national and state provider association we contacted during the course of our work. AMA commented that we failed to account for the two clinical areas of patient care in which impairment of access has been the most egregious: obstetrical and ER services. It attributed its concern to our acknowledgment in the report that we were unable to use Medicare claims data to investigate reported concerns about these services. 
Because of the recognized limitations of Medicare claims data for these and other services, we used other methods to explore whether malpractice-related pressures had affected access to ER on-call surgical services and newborn deliveries and indeed found—and reported—evidence of access problems for these services in localized areas. In response to AMA and technical comments from the other reviewers, we clarified the report’s discussion of our methodology for this issue. AMA commented that using aggregated data on physician supply to draw conclusions about access to care is problematic. It said that physicians tend to hold multiple state licenses and typically retain their licenses when they relocate their practices, thus potentially obscuring the supply of practicing physicians, and overall counts of physicians can obscure the impact of changes for different specialties and different jurisdictions. We agree that measuring changes in physician supply—especially changes due to malpractice-related issues—and the related effects on access to care is problematic. Sharing AMA’s concerns, during the course of our work we obtained available data reported by state medical licensing agencies for newly licensed physicians and for physicians practicing in the state whenever possible rather than for all licensed physicians and contrasted those data with reports of departing physicians. As noted in the draft report, although we reported physician supply and practice changes at the state level, the number of recent departures attributed specifically to malpractice concerns was relatively small and usually not concentrated in particular locales. Also as noted in the draft report, we further explored reports of specialty-specific problems, such as orthopedic surgeons in Pennsylvania and OB/GYNs in Nevada. 
For example, we analyzed rates of all procedures performed by orthopedic surgeons in Pennsylvania and found them to be growing, and called a random sample of OB/GYN practices in Clark County, Nevada, and on that basis determined that obstetrical care was readily available. Moreover, our Medicare claims analysis of certain high-risk services was specialty-specific. For example, to assess assertions by orthopedic surgeons that they have reduced the provision of spinal surgeries and joint revisions and repairs, our analysis was limited to only those services performed by orthopedic surgeons. AMA commented that our analysis of Medicare claims data as of June 2002 does not capture the current experience of physician decisions to curtail certain services or to retire or relocate their practices, the impact of which takes time to develop. We agree it is challenging to identify data that are sufficiently current and reliable to describe the effects of reported problems. However, we reported that premium increases began about 2000, and others have found that premiums began increasing as early as the late 1990s. We therefore believe that analyzing Medicare claims data through June 2002 provides important insights into at least 2 years of this most recent period of rising premiums. Moreover, we augmented our Medicare claims analysis with more recent qualitative data, such as interviews in late 2002 and early 2003, with national and state provider associations and local providers in areas where access problems were reported to exist. AMA commented that while specific estimates of defensive medicine costs have not been conclusive, the vast majority of peer-reviewed research indicates that those costs are enormous, in the tens of billions of dollars per year. To support this point, AMA cited three recent government studies. 
As our report notes, the peer-reviewed literature attempts to quantify the extent and sometimes the cost of defensive medicine under narrowly defined clinical circumstances that cannot be generalized more broadly. Two of the three government studies that AMA cited are examples of what we believe to be overgeneralizations of prior study results. We cite one of these by way of example in our report. The third government study AMA cited does not address the cost of defensive medicine but instead explicitly notes the difficulty of estimating such costs and the speculative nature of existing estimates. AMA also commented that our draft report ignored the impact of defensive medicine costs in terms of patient access, expressing the view that these costs are ultimately reflected in rising health insurance premiums that contribute substantially to the number of uninsured. Our draft report noted that, because of the absence of data to reliably measure overall malpractice-related costs—such as the combined cost of malpractice insurance premiums, litigation, and defensive medicine practices—we did not assess the indirect impact on access to care that may result from any added costs that malpractice pressures impose on the health care system. In response to AMA’s comment, we moved our discussion of this point to the report’s Results in Brief. As agreed with your offices, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to other interested congressional committees and Members of Congress. We will also make copies available to others on request. In addition, this report is available at no charge at the GAO Web site at http://www.gao.gov. Please call me at (202) 512-7118 or Randy DiRosa at (312) 220-7671 if you have any questions. Other major contributors are listed in appendix IV. 
During the course of our work, we contacted a number of national and state health care provider associations in order to identify the actions health care providers have taken in response to malpractice pressures and the localized effects of any reported actions on consumers’ access to health care.

In response to concerns about rising malpractice premiums, we examined how health care provider responses to rising premiums have affected access to health care, what is known about how rising premiums and fear of litigation cause health care providers to practice defensive medicine, and how rates of growth in malpractice premiums and claims payments compare across states with varying levels of tort reform laws. To evaluate how actions taken by physicians in response to malpractice premium increases have affected consumers’ access to health care, we focused our review at the state level because reliable national data concerning physician responses to malpractice pressures were not available. We selected nine states that encompass a range of premium pricing and tort reform environments. Five of the states—Florida, Mississippi, Nevada, Pennsylvania, and West Virginia—are among those cited as “crisis” or “problem” states by the American Medical Association (AMA) and other health care provider organizations based on such factors as higher than average increases in malpractice insurance premium rates, reported difficulties obtaining malpractice coverage, and reported actions taken by providers in response to their concerns about rising premiums and malpractice litigation. Four of the states—California, Colorado, Minnesota, and Montana—are not cited by provider groups as experiencing malpractice-related problems. (See table 3.) 
In each of the nine states we reviewed, we contacted or interviewed officials from associations representing physicians, hospitals, and nursing homes to more specifically identify the actions physicians have taken in response to malpractice pressures and the localized effects of any reported actions on access to services. (See app. I for a complete list of the provider organizations we contacted at the state and national levels.) Such actions were reported only in the five states with reported problems. In these five states we obtained and reviewed the evidence upon which the reports were based. Evidence of physician departures, retirements, practice closures, and reduced availability of certain hospital-based services consisted of survey results, information compiled and quantified by provider groups, and unquantified anecdotal reports collected by provider groups. Although we did not attempt to confirm each report cited by state provider groups, we judgmentally targeted follow-up contacts with local providers where the reports suggested potentially acute consumer access problems or where multiple reports were concentrated in a geographic area. With the local providers we contacted directly, including representatives of physician practices, clinics, and hospitals, we discussed the reports provided by the state provider groups and explored the resulting implications for consumers’ access to health care. In total, we contacted 49 hospitals and 61 clinics and physician practices in the five states. From these contacts we identified examples of access problems that were related to providers’ concerns about malpractice-related pressures as well as examples of provider actions that did not appear to affect consumer access or were not substantiated. We separately examined evidence of specific high-risk services that providers reportedly reduced in response to concerns about malpractice pressures. 
Such evidence consisted of results from surveys conducted by national and state-level medical, hospital, and specialty associations that identified the high-risk procedures physicians reported reducing or eliminating in response to malpractice pressures. High-risk services commonly identified in these surveys included spinal surgeries, joint revisions and repairs, mammograms, physician services in nursing homes, emergency room services, and obstetrics. We analyzed Medicare utilization data to assess whether reported reductions in three of these high-risk services—spinal surgery, joint revisions and repairs, and mammograms—have had a measurable effect on consumers’ access to these services. To calculate service utilization rates per thousand fee-for-service Medicare beneficiaries enrolled in part B, we used Medicare part B physician claims data from January 1997 through June 2002 and the Medicare denominator files from 1997 through 2001. For 2002, we estimated each state’s part B fee-for-service beneficiary count by adjusting the 2001 count by the change in the 65 and older population between 2001 and 2002 and the change in Medicare beneficiaries enrolled in part B managed care plans between January 1 and July 1, 2002.

To assess what is known about how rising premiums and fear of litigation cause health care providers to practice defensive medicine, we reviewed studies that examined the prevalence and costs of defensive medicine and the potential impact of tort reform laws on mitigating these costs that were published in 1994 or later, generally in peer-reviewed journals, or were conducted by government research organizations. We identified these studies by searching databases including MEDLINE, Econlit, Expanded Academic ASAP, and ProQuest; and through contacts with experts and affected parties. 
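The utilization-rate calculation and the 2002 denominator adjustment described earlier in this appendix reduce to simple arithmetic. The sketch below is one illustrative reading of that methodology, not the report's actual program; the function names and all numbers are hypothetical.

```python
def utilization_per_thousand(service_claims, ffs_beneficiaries):
    """Service utilization rate per 1,000 fee-for-service Medicare
    beneficiaries enrolled in part B."""
    return 1000.0 * service_claims / ffs_beneficiaries

def estimate_2002_ffs_count(count_2001, pop_65plus_growth, managed_care_shift):
    """Approximate a state's 2002 part B fee-for-service count: grow the
    2001 count with the change in the 65-and-older population, then net
    out beneficiaries who moved into part B managed care plans between
    January 1 and July 1, 2002. (Illustrative interpretation of the
    adjustment described in the text.)"""
    return count_2001 * (1.0 + pop_65plus_growth) - managed_care_shift

# Hypothetical example: 5,000 spinal surgery claims in a state with an
# estimated 1,000,000 part B fee-for-service beneficiaries in 2002.
est_2002 = estimate_2002_ffs_count(1_000_000, 0.02, 20_000)
rate = utilization_per_thousand(5_000, est_2002)  # claims per 1,000 beneficiaries
```

Trends such as those shown in figures 2 through 4 would come from computing this rate for each state and period and comparing it with the national average.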
Several studies published prior to 1994 were reviewed by the Office of Technology Assessment (OTA) in its comprehensive 1994 report on defensive medicine, which we included in our review. In addition, we explored the issue with medical provider organizations and examined the results of several recent surveys, including those conducted by national health care provider organizations, in which providers were asked about their own defensive medicine practices. To assess the growth in medical malpractice premium rates and claims payments across states, we compared trends in states with tort reforms that include noneconomic damage caps (4 states with a $250,000 cap and 8 states with a $500,000 or less cap) to the 11 states (including the District of Columbia) with limited reforms and the average for all states. We focused our analysis on those states with noneconomic damage caps as a key tort reform because such caps are included in proposed federal tort reform legislation and because published research generally reports that such caps have a greater impact on medical malpractice premium rates and claims payments than some other types of tort reform measures. We did not separately assess trends in the 28 states with various other tort reforms because of the wide range of often dissimilar and incomparable tort reforms that are included among these states. Because research suggests that any impact of tort reforms on premiums or claims can be expected to follow the implementation of the reforms by at least 1 year, we grouped states into their respective categories based on reforms that had been enacted no later than 1995 and reviewed premium rate and claims payment data for the period 1996 through 2002. We relied upon a summary of state tort reforms compiled by the National Conference of State Legislatures (NCSL) to place states within the reform categories and reviewed the information with respect to the 9 study states for accuracy in February 2003. (See table 4.)
To assess the growth in medical malpractice premiums, we analyzed state-level malpractice premium rates for the specialties of general surgery, internal medicine, and obstetrics/gynecology (OB/GYN) reported by insurers to the Medical Liability Monitor (MLM) from 1996 to 2002. Our analysis does not capture the experience of other physician specialties and other types of medical providers such as hospitals and nursing homes. MLM reports base premium rates that do not reflect discounts or rebates that may effectively reduce the actual premium rates charged. We generally excluded data from insurers that did not consistently report premium rates across most of the years studied. We also excluded surcharges for contributions to state patient compensation funds (PCF) because these were inconsistently reported across states and years. We adjusted rates for inflation using the urban consumer price index. We calculated a composite average premium across all three specialties, as well as specialty-specific average premiums, for each year. We then analyzed growth rates in these average premiums from 1996 through 2002 across all states. To assess the growth in medical malpractice claims payments, we analyzed state-level claims payment data from the National Practitioner Data Bank (NPDB) from 1996 to 2002, which had been adjusted to 2002 dollars. We calculated average per capita claims payments and their growth rates for each state across this time frame. Assuming a 1-year lag to allow the reforms to affect these indicators, we calculated overall averages of these indicators from 1996 to 2002, and used these averages to compare average per capita payments and their rates of growth across the reform categories. The NPDB claims data we analyzed contain notable limitations. First, they include malpractice claims against licensed physicians only, and not against institutional providers such as hospitals and nursing homes.
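The inflation adjustment, composite averaging, and growth-rate computations described above can be sketched as follows. The CPI values and premium figures here are invented for illustration and are not MLM, BLS, or NPDB data.

```python
def to_2002_dollars(nominal, cpi_that_year, cpi_2002):
    """Adjust a nominal dollar amount for inflation using the urban CPI."""
    return nominal * cpi_2002 / cpi_that_year

def composite_average(premiums):
    """Average premium across the three specialties (general surgery,
    internal medicine, OB/GYN)."""
    return sum(premiums) / len(premiums)

def growth_rate_pct(earlier, later):
    """Percentage growth between two years' values."""
    return 100.0 * (later - earlier) / earlier

# Invented figures for illustration only.
adjusted = to_2002_dollars(10_000, cpi_that_year=160.0, cpi_2002=180.0)  # 11,250.0
avg_year1 = composite_average([30_000, 10_000, 50_000])                  # 30,000.0
growth = growth_rate_pct(avg_year1, 33_000)                              # 10.0 percent
```

The same growth-rate computation applies to average per capita claims payments, with the averages then compared across the noneconomic-damage-cap and limited-reform state groups.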
Second, as we have previously reported, NPDB claims may be underreported. When physicians are not specifically named in a malpractice judgment or settlement, the related claims are not reported to the data bank, and claims paid by certain self-insured and managed care plans may be underreported as well. The extent to which this underreporting occurs is not known. Finally, NPDB data do not capture legal and other administrative costs associated with malpractice claims. We examined other sources of information on claims payments and found none to be a comprehensive data source for each state that captures malpractice claims costs from all segments of the malpractice insurance market—commercial insurers, physician-mutual companies, and self-insured and other groups. For example, data reported to the National Association of Insurance Commissioners (NAIC) have been used in other research; however, data are not reported consistently across states and exclude payments from certain insurers. According to NAIC officials, the laws that dictate reporting requirements differ by state, and not all insurers are required to report in every state. They also stated that exempted insurers can include those operating in a single state and certain physician mutual companies. In all states, self-insured groups, which represent a substantial proportion of the medical malpractice insurance market, are exempted from reporting. Similarly, the Insurance Services Office (ISO) is a private organization providing state-level price advisory information to state insurance regulators. However, ISO does not operate in all states, does not uniformly collect data on hospital claims or claims from physician mutual companies, and represents only 25 to 30 percent of the total medical malpractice market. The Physician Insurers Association of America is an association of physician mutual companies; however, it does not share proprietary state-level claims data.
Jury Verdict Research is a private research organization that collects data from several different sources, including attorneys and media reports, among others. Some have criticized the accuracy of this data set for several reasons, including its varied and unsystematic data collection process and the likelihood that large verdict awards are more often included than smaller verdict awards. Table 5 summarizes the scope, methods, results, and limitations of studies that examined the prevalence and costs of defensive medicine practices or the potential impact of tort reform laws on mitigating defensive medicine costs. Studies were published in 1994 or later, generally in peer-reviewed journals, or were conducted by government research organizations. In addition to the person named above, key contributors to this report were Gerardine Brennan, Iola D’Souza, Corey Houchins-Witt, and Margaret Smith. Medical Malpractice Insurance: Multiple Factors Have Contributed to Increased Premium Rates. GAO-03-702. Washington, D.C.: June 27, 2003. National Practitioner Data Bank: Major Improvements Are Needed to Enhance Data Bank’s Reliability. GAO-01-130. Washington, D.C.: November 17, 2000. Medical Malpractice: Effects of Varying Laws in the District of Columbia, Maryland, and Virginia. GAO/HEHS-00-5. Washington, D.C.: October 15, 1999. Medical Liability: Impact on Hospital and Physician Costs Extends Beyond Insurance. GAO/AIMD-95-169. Washington, D.C.: September 29, 1995. Medical Malpractice: Medicare/Medicaid Beneficiaries Account for a Relatively Small Percentage of Malpractice Losses. GAO/HRD-93-126. Washington, D.C.: August 11, 1993. Medical Malpractice: Experience with Efforts to Address Problems. GAO/T-HRD-93-24. Washington, D.C.: May 20, 1993. Medical Malpractice: A Continuing Problem with Far-Reaching Implications. GAO/T-HRD-90-24. Washington, D.C.: April 26, 1990.
The recent rising cost of medical malpractice insurance premiums in many states has reportedly influenced some physicians to move or close practices, reduce high-risk services, or alter their practices to preclude potential lawsuits (known as defensive medicine practices). States have revised tort laws under which malpractice lawsuits are litigated to help constrain malpractice premium and claims costs. Some of these tort reform laws include caps on monetary penalties for noneconomic harm, such as for plaintiffs' pain and suffering. Congress is considering legislation similar to some states' tort reform laws. GAO examined how health care provider responses to rising malpractice premiums have affected access to health care, whether physicians practice defensive medicine, and how growth in malpractice premiums and claims payments compares across states with varying tort reform laws. Because national data on providers' responses to rising premiums are not reliable, GAO examined the experiences in five states with reported malpractice-related problems (Florida, Nevada, Pennsylvania, Mississippi, and West Virginia) and four states without reported problems (California, Colorado, Minnesota, and Montana) and analyzed growth in malpractice premiums and claims payments across all states and the District of Columbia. Actions taken by health care providers in response to rising malpractice premiums have contributed to localized health care access problems in the five states reviewed with reported problems. GAO confirmed instances in the five states of reduced access to hospital-based services affecting emergency surgery and newborn deliveries in scattered, often rural, areas where providers identified other long-standing factors that also affect the availability of services. Instances were not identified in the four states without reported problems. 
In the five states with reported problems, however, GAO also determined that many of the reported provider actions were not substantiated or did not affect access to health care on a widespread basis. For example, although some physicians reported reducing certain services they consider to be high risk in terms of potential litigation, such as spinal surgeries and mammograms, GAO did not find access to these services widely affected, based on a review of Medicare data and contacts with providers that have reportedly been affected. Continuing to monitor the effect of providers' responses to rising malpractice premiums on access to care will be essential, given the import and evolving nature of this issue. Physicians reportedly practice defensive medicine in certain clinical situations, thereby contributing to health care costs; however, the overall prevalence and costs of such practices have not been reliably measured. Studies designed to measure physicians' defensive medicine practices examined physician behavior in specific clinical situations, such as treating elderly Medicare patients with certain heart conditions. Given their limited scope, the study results cannot be generalized to estimate the extent and cost of defensive medicine practices across the health care system. Limited available data indicate that growth in malpractice premiums and claims payments has been slower in states that enacted tort reform laws that include certain caps on noneconomic damages. For example, between 2001 and 2002, average premiums for three physician specialties--general surgery, internal medicine, and obstetrics/gynecology--grew by about 10 percent in states with caps on noneconomic damages of $250,000, compared to about 29 percent in states with limited reforms. GAO could not determine the extent to which differences in premiums and claims payments across states were caused by tort reform laws or other factors that influence such differences. 
In commenting on a draft of this report, three independent reviewers with expertise on malpractice-related issues generally concurred with the report findings, while the American Medical Association (AMA) commented that the scope of work was not sufficient to support the finding that rising malpractice premiums have not contributed to widespread health care access problems. While GAO disagrees with AMA's point of view, the report was revised to better clarify the methods and scope of work for this issue.
HHS is charged with ensuring that HHAs meet conditions of participation in the Medicare program that are adequate to protect the health and safety of beneficiaries. As shown in table 1, Medicare has 12 conditions of participation covering such areas as patient rights; acceptance of patients, plans of care, and medical supervision; and skilled nursing services. Most conditions, in turn, comprise more detailed standards; for example, the skilled nursing condition has two standards—one addresses the duties of registered nurses and the other the duties of licensed practical nurses. The conditions and standards are further clarified in interpretive guidelines, which explain relevant statutes and regulations. Before an HHA can participate in Medicare, it must be surveyed and found in compliance with its conditions of participation. This survey and certification process is administered by HCFA through state survey agencies—usually components of the state health departments. HCFA funds these survey agencies to assess HHAs against Medicare’s conditions of participation and associated standards. Surveys are conducted on-site at the HHA and involve activities such as clinical records review and home visits with patients. HCFA’s State Operations Manual provides guidance to state surveyors on conducting their surveys. Once an HHA passes its initial survey and meets certain other requirements, HCFA certifies it as a Medicare provider and issues a provider number, which the agency uses to bill Medicare. To retain its certification, an HHA must remain in compliance with all of the conditions of participation. Each HHA is supposed to be recertified every 12 to 36 months following the same process used in the initial survey, with the frequency depending upon factors such as whether ownership changed and the results of prior surveys. But complaints about HHA services may trigger an earlier survey.
HHAs can lose their certification and be terminated from the program if they do not comply with one or more conditions; for example, an HHA providing substandard skilled nursing care that threatens patient health and safety can be terminated. However, HHAs not complying with a condition of participation can avoid termination by implementing corrective actions. Practically anyone who meets state or local requirements to start an HHA can be virtually assured of Medicare certification. It is rare that any new HHA is found not to meet Medicare’s three fundamental certification requirements: (1) being financially solvent; (2) complying with title VI of the Civil Rights Act of 1964, which prohibits discrimination; and (3) meeting Medicare’s conditions of participation. HHAs self-certify their solvency, agree to comply with the act, and undergo a very limited initial certification survey that few fail. Currently, HCFA certifies about 100 new HHAs each month. Even a history of criminal activity is not a deterrent to agency certification unless that criminal activity specifically prohibits the individual from Medicare participation. Each certified HHA must provide skilled nursing services and one other covered service—physical, speech, or occupational therapy; medical social services; or home health aide services. HHAs can offer all of these services if they choose to do so. Only one of an HHA’s services must be delivered exclusively by its staff; any additional covered services the HHA offers can be provided either directly or under contract with another health care organization that does not have to be Medicare certified. During the initial certification process, surveyors conduct what is called a standard survey; this survey is required by statute to assess the quality of care and scope of services the HHA provides as measured by indicators of medical, nursing, and rehabilitative care.
The standard survey addresses an HHA’s compliance with 5 of the 12 conditions of participation plus one of the standards associated with a sixth condition that HCFA believes best evaluate patient care (see table 1). If surveyors identify substandard care during the standard survey, they are to conduct a more in-depth review of the HHA’s compliance with the other conditions of participation. These initial surveys often take place so soon after an HHA begins operating that surveyors have little information with which to judge the quality of care an HHA provides or the HHA’s potential for providing such care. We found that initial surveys frequently are made when HHAs have served as few as one patient for less than 1 month and have not yet provided all the services for which they are to be certified. The surveyor may never see any patients or otherwise assess the care the HHA is providing, even though visiting patients is recognized by HCFA and state surveyors as the best way to evaluate an HHA’s care. Furthermore, the HHAs are typically caring for non-Medicare beneficiaries at the time of their initial survey; these patients may have medical conditions that differ from those of Medicare beneficiaries needing home health care. For example, shortly after one HHA was initially certified, complaints alleged that it had been (1) enrolling patients who were either ineligible for the Medicare home health benefit or who had been referred for care without a physician’s orders and (2) hiring home health aides on the condition that they first recruit a patient. Approximately 10 months following initial certification, state surveyors substantiated the complaints and also found that the HHA was not complying with four conditions and multiple standards, including four standards that the HHA had been cited for violating during its initial survey. The surveyors also identified 13 cases in which they suspected the HHA provided unnecessary services or served ineligible beneficiaries; the surveyors referred these cases to the Medicare claims processing contractor.
One month later, the surveyors conducted a follow-up survey and found that the agency had implemented corrective actions, as it had following its initial survey. No further surveys had been conducted at the time of our review. Another individual with no home health care experience started a California HHA, which was Medicare certified in 1992. Within 1 year of certification, state surveyors and the Medicare claims processing contractor received numerous complaints alleging that the HHA had served patients ineligible for the Medicare benefit, falsified medical records, falsified the credentials of the director of nursing, and used staff inappropriately. A recertification survey about 15 months after initial certification found that the HHA was not complying with multiple conditions of participation and had endangered patient health and safety. By September 1993, after Medicare had paid the HHA over $6 million, the HHA closed. The owner, a former drug felon, and an associate later pled guilty to defrauding Medicare of over $2.5 million. HCFA officials said, however, that requiring HHAs to serve a minimum number of patients before certification would not be a reasonable requirement for all HHAs seeking certification. In some rural states, 10 patients may represent an entire year’s patient workload. Setting a 10-patient minimum on a national basis could therefore result in denying beneficiaries access to home health care services if they live in sparsely populated areas of the country, according to the HCFA officials. Medicare’s recertification process does not ensure that only those HHAs that provide quality care in accordance with Medicare’s conditions of participation remain certified.
The primary problems are that (1) HHAs do not have to periodically demonstrate compliance with all of Medicare’s conditions of participation; (2) surveyors do not fully review an HHA’s branch office operations; (3) rapidly growing HHAs do not receive more frequent surveys, even though rapid growth has been linked to difficulties in complying with Medicare’s conditions; and (4) HHAs repeatedly cited for serious deficiencies identified during a standard survey are rarely terminated or otherwise penalized. HCFA initially certifies and then recertifies most HHAs without requiring them to ever demonstrate compliance with all the conditions of participation. Instead, HCFA asks the surveyors to initially limit their evaluation of HHAs to the standard survey and then expand the survey to the other conditions only if they find problems. As a result, HCFA and Medicare patients usually do not know whether an HHA is complying with conditions not included in the standard survey. In one special project, however, surveyors evaluated 44 HHAs against all of the conditions of participation that address the HHA’s operations and the care it provides to Medicare beneficiaries. Nearly three-quarters of the HHAs failed to comply with at least one of the conditions not covered in the standard survey, and 21 of the 44 HHAs either voluntarily withdrew their certification or had their certification terminated by HCFA. Although this project targeted HHAs suspected of problems, it does demonstrate that criteria other than those used in the limited standard survey may be better predictors of compliance with all the conditions of participation. HCFA defines a branch office of an HHA as a unit within the geographic area served by the parent office that shares administration, supervision, and services with the parent office. Since the mid-1980s, many HHAs have created branch offices. As shown in figure 1, about 2,200 HHAs operated nearly 5,500 branch offices in January 1997—over four times the number in November 1993.
In Texas, for example, we identified 106 HHAs with 3 or more branches, and 1 HHA had 25 branch offices. Since they are considered to be an integral part of an HHA, branches are not required to independently meet the conditions of participation. Further, HCFA does not require surveyors to visit patients served by each branch office. Since new branch offices do not undergo an initial certification survey, HCFA cannot be assured that they meet Medicare’s definition of a branch office. And, most importantly, not directly surveying branch operations means that quality-of-care issues within an HHA’s overall operations may be missed. When branches have been surveyed because the HHA wanted to convert them to parent offices, significant problems have been found. Several examples follow: In California, surveyors found that one branch of an HHA cared for 581 patients over the 12 months ending September 1996—more than the average number of patients cared for by an HHA in the state during that time. Moreover, the branch was not complying with one condition of participation, and the surveyors recommended denial of the HHA’s initial certification. Among its problems was that the branch had no system in place to ensure that its contractor staff had the appropriate qualifications and licenses. Similarly, a branch office of a Massachusetts HHA had cared for 69 patients since the HHA’s last survey. The branch was denied initial certification as a parent office because it failed to meet nine standards associated with several conditions of participation. For example, the surveyors found that the branch office, in 10 of 12 cases examined, did not follow the plan of care and provide services as frequently as ordered by a physician. At the time of our review, the HHA had not yet submitted its correction plan and had not been certified as a parent office.
While HCFA’s guidance allows surveyors to conduct the entire recertification survey of an HHA at a branch office, state surveyors told us that this is seldom, if ever, done. Branch offices typically do not maintain all the personnel files or clinical information that surveyors need in their evaluation. As a practical matter, surveyors told us that they may not have time to conduct home visits with branch office patients and still finish the survey within their allotted time and resources. In addition, HCFA’s survey frequency criteria do not consider whether an HHA is growing rapidly or maintaining a stable level of operations—information state surveyors generally would not have before conducting their survey. New HHAs have the potential for rapid growth and, as a result, are more likely to have difficulties complying with Medicare’s conditions of participation. As shown in table 2, we found that nearly one-fourth of the HHAs initially certified in 1993 in California and Texas received Medicare payments exceeding $1 million in 1994—their first full year of Medicare certification—and the average number of patients they treated in a year at least tripled between 1993 and 1995. For example, in 1993, one California HHA treated 11 patients and received $33,000 from Medicare; in 1995, the HHA treated 1,810 patients and received $12.7 million in Medicare payments. Also, the percentage of these rapidly growing HHAs cited for noncompliance with the conditions of participation exceeded the national norm. Nationwide, about 3 percent of all HHAs each year are cited for failing to meet Medicare’s conditions of participation. In contrast, 40 percent of the high-growth HHAs in California and 11 percent of the high-growth Texas HHAs did not meet the conditions in their most recent surveys. HCFA issued its survey frequency criteria in May 1996, after legislation authorized it to increase the maximum interval between surveys from 15 months to 3 years.
As previously noted, HCFA’s criteria consider factors such as an HHA’s prior survey results, changes in ownership, and complaints. By not considering an HHA’s rate of growth when setting survey frequency, however, HCFA is missing an opportunity to more quickly identify and correct compliance deficiencies. Such information is available from Medicare contractors and HCFA. Once certified as a Medicare provider, an HHA is virtually assured of remaining in the program even if repeatedly found to be violating Medicare’s conditions of participation and associated standards. There are no penalties short of termination because HCFA has not developed intermediate sanctions as it was authorized by the Congress to do a decade ago. HCFA officials told us that they wanted experience with the skilled nursing facility intermediate sanctions, which became effective in July 1995, before implementing intermediate sanctions against HHAs. Until the advent of Operation Restore Trust (ORT), the likelihood of an HHA’s being terminated from the Medicare program was remote. In fiscal years 1994, 1995, and 1996, about 3 percent of all certified HHAs were terminated, and most of these were voluntary terminations arising from either mergers or closures. Only about 0.1 percent of all certified HHAs in fiscal years 1994 and 1995 and 0.3 percent in fiscal year 1996 were involuntarily terminated as a result of noncompliance with the conditions of participation. California accounted for almost half of the 32 involuntary terminations nationwide in 1996, with 8 of its 15 involuntary terminations that year stemming from the ORT project. To terminate an HHA, the surveyors must find that it did not comply with one or more conditions and remained out of compliance 90 days after a survey first identified the noncompliance. If an HHA threatened with termination takes corrective action and state surveyors verify through site visits that this action has brought the HHA back into compliance, HCFA will cancel the termination process.
Under Medicare’s termination procedures, HHAs remain in the program, to the potential detriment of beneficiaries, even if they repeatedly fail to comply with Medicare’s conditions of participation. On each of its three most recent surveys, for example, one HHA had been cited for not following physicians’ orders in the written plan of care. The HHA remains certified despite its repeated problems. Moreover, on a Texas HHA’s first recertification survey, 1 year after initial certification, the state surveyor found four standards not met and referred several cases of possible fraud to the Medicare contractor. Within 10 months of that survey, state surveyors resurveyed the HHA and found it was not in compliance with seven conditions of participation, and the previously cited deficiencies in meeting standards had not been corrected. HCFA issued a termination letter, but within 2 months of the last survey the HHA had corrected the deficiencies, and the termination process was halted. On a complaint investigation 6 months after the deficiencies had been corrected, the surveyors found the HHA was again out of compliance with three of the same seven conditions. On this most recent survey, the surveyors attributed the death of one patient directly to this HHA. At the time her attorney advised her to surrender her state license and Medicare certification, the owner/operator of this HHA had already hired a nurse consultant to bring the HHA back into compliance. HHAs are not threatened with termination if they are complying with the conditions of participation but are violating one or more standards and subsequently submit a corrective action plan. But surveyors often do not revisit the HHA to verify that it has implemented the plan and actually corrected the deficiencies. For example, Illinois surveyors did not revisit 13 of 21 HHAs that had submitted plans to correct their violations of Medicare’s standards. In short, the only sanction available to HCFA to penalize deficient HHAs is to terminate them from the program.
HHAs provide valuable services that enable a growing number of beneficiaries to continue living at home. Accompanying this increase in beneficiaries have been sharply increasing Medicare payments and rapidly rising numbers of certified HHAs. HCFA’s HHA survey and certification process, however, fails to provide beneficiaries with reasonable assurance that their HHA meets Medicare’s conditions of participation and provides quality care. Yet, certification represents Medicare’s “seal of approval” on the services provided by an HHA. Our ongoing work suggests that it is simply too easy to become Medicare certified. Before they are certified, HHAs do not have to demonstrate a sustained capability to provide quality care to a minimum number of patients for all types of services. And because the requirements are minimal, HCFA certifies nearly all HHAs seeking certification. While many HHAs are drawn to the program with the intent of providing quality care, some are attracted by the relative ease with which they can become certified and participate in this lucrative, growing industry. HHAs can remain in the program with little fear of losing their certification. Most will never have to demonstrate compliance with all of the participation conditions, and, even if they are found out of compliance, temporary corrective actions are sufficient to allow them to continue to operate. These problems suggest that HCFA needs to pay closer attention to how it surveys and certifies HHAs. We expect that our upcoming report will contain specific recommendations on how HCFA can strengthen the survey and certification process so that it provides greater assurance that only those HHAs that provide quality care in accordance with requirements participate in Medicare. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or Members of the Committee may have.
Pursuant to a congressional request, GAO discussed how Medicare: (1) controls the entry of home health agencies (HHA) into the Medicare Program; and (2) ensures that HHAs in the program comply with Medicare's conditions of participation and associated standards. GAO noted that: (1) it is finding that Medicare's survey and certification process imposes few requirements on HHAs seeking to serve Medicare patients and bill the Medicare program; (2) the certification of an HHA as a Medicare provider is based on an initial survey that takes place so soon after the agency begins operating that there is little assurance that the HHA is providing or capable of providing quality care; and (3) moreover, once certified, HHAs are unlikely to be terminated from the program or otherwise penalized, even when they have been repeatedly cited for not meeting Medicare's conditions of participation and for providing substandard care.
VA provides health care through a direct delivery system of 173 hospitals and over 200 free-standing clinics nationwide. VA facilities also purchase health care from other public and private providers under certain conditions, such as medical emergencies. VA served over 2.6 million veterans at a cost of about $16.2 billion in fiscal year 1995. In 1995, VA restructured its system into 22 VISNs. Each contains from 5 to 11 hospitals, as well as several clinics, covering a specified geographic area that reflects patient referral patterns and the availability of medical services. The networks are responsible for consolidating and realigning services within their areas to provide an interlocking, interdependent system of care. VA expects to improve efficiency by trimming management layers, consolidating redundant medical services, and better using available private and public resources. Another important change in the VA health care system is an enhanced focus on the provision of primary care and an increased emphasis on shifting care from inpatient to outpatient settings. VA is in the process of implementing a primary care approach in all of its clinics. Under primary care, veterans are expected to enroll in an outpatient clinic, where they are assigned to a primary care physician or physician group. When needed, VA primary care physicians refer veterans to VA or community hospitals. Because non-VA physicians do not have admitting rights to VA hospitals, the workload of VA hospitals is driven almost entirely through referrals from its outpatient clinics. Northern California and parts of Nevada are served by the Sierra Pacific Network. The network operates the hospital beds in the Travis joint venture project as well as hospitals in Reno, Nevada, and in Fresno, San Francisco, and Palo Alto (with divisions in Livermore, Palo Alto, and Menlo Park). 
It also operates outpatient clinics at each of these locations as well as satellite outpatient clinics in Martinez, Redding, Oakland, Sacramento, and San Jose. Although the Air Force operates an outpatient clinic at David Grant Medical Center (DGMC), VA does not currently have an outpatient clinic at Travis. Figure 1 shows the location of the major VA facilities in the Sierra Pacific Network. The proposed Travis project is located in the Sierra Pacific Network’s Northern California Health Care System (NCHCS). NCHCS includes the clinics in Martinez, Oakland, Redding, and Sacramento and the hospital beds at Travis Air Force Base. NCHCS primarily serves veterans east of the San Francisco Bay and in the northern part of the state. Currently, the only VA hospital beds operated in this area are the 55 beds in the joint venture at Travis. Travis Air Force Base is located about 50 miles northeast of San Francisco and about 77 miles northeast of Palo Alto. It is about 44 miles southwest of Sacramento, 34 miles northeast of Martinez, 41 miles northeast of Oakland, and 179 miles south of Redding. The proposed Travis project service area is shown in figure 2. The service area comprises 14 counties: Alameda, Butte, Colusa, Contra Costa, Glenn, Sacramento, Shasta, Siskiyou, Solano, Sutter, Tehama, Trinity, Yolo, and Yuba. The NCHCS service area continues to include the large veteran population in the East Bay (Oakland/Martinez) and Sacramento areas. Table 1 shows the number of veterans living in the four counties in the NCHCS service area with the largest veteran populations. Through its construction planning, VA expects to improve the geographic accessibility of VA hospital and outpatient care for veterans currently served by NCHCS, as well as for those who have not previously sought care from VA. When VA closed its hospital in Martinez, much of the area was left with limited access to VA hospital and outpatient care. 
In fiscal year 1991, the Martinez hospital had an average daily census of 235 patients. Although the Martinez hospital served veterans from much of northern California, most users came from the East Bay and Sacramento areas. In 1991, the Congress appropriated emergency funds to construct a replacement outpatient clinic and a nursing home on the grounds of the closed hospital. The replacement clinic—a prototype for the VA system—became operational in November 1992. It included modern ambulatory surgery capabilities, sophisticated imaging technology, and attractive surroundings. Construction of the nursing home was delayed pending demolition of the hospital building, but the nursing home is scheduled to open in the fall of 1996. In 1992, VA planners conducted a study to determine where to build a replacement hospital. The options considered included partially renovating and seismically retrofitting the closed Martinez hospital, constructing a new hospital in Sacramento, constructing dual hospitals in Martinez and Sacramento, and constructing a joint venture hospital at Travis Air Force Base. Although the dual hospital option was judged to offer the greatest improvement in accessibility, the cost was considered prohibitive. After further negotiations with the affected parties, which resulted in the Air Force’s offer to allow VA to establish some hospital beds at DGMC on an interim basis and reduce the number of beds to be included in the final construction project, VA decided on the 243-bed joint venture, including 170 new beds and 73 existing beds. Although VA sought funding for the hospital project in its fiscal year 1996 budget submission, the Congress did not fund the hospital aspect of the project. Instead, the Congress provided $25 million to construct only an outpatient clinic at Travis. 
Rather than going forward with construction of the clinic, however, VA, in its fiscal year 1997 budget submission, requested $32 million toward construction of the entire original $211 million project. Moreover, VA estimates that it will need about $67 million more in one-time activation costs for the completed facility and about $72 million a year to operate it. The proposed Travis project would probably add to existing excess hospital beds both in the VA system and in the community. Moreover, not enough low-income and service-connected veterans live near Travis Air Force Base to support a clinic of the size VA proposes. To support the clinic, VA would need to focus on attracting large numbers of higher-income veterans with no service-connected disabilities or attracting veterans from other NCHCS clinics. The 1992 decision to add 170 new hospital beds at Travis has essentially been overcome by events. Both VA and the private sector are increasingly shifting care to outpatient settings, decreasing demand for hospital care. Not only has VA been able to meet the demands for hospital care through use of existing VA and community beds, but there is also significant excess hospital capacity in VA, DOD, and community facilities. To support the proposed number of beds planned for the Travis project, VA would need to more than triple the number of people it serves. Such an increase in market share appears unlikely because the veteran population in the service area is projected to decrease by about 25 percent between 1995 and 2010. To the extent that VA is successful in increasing its market share by attracting veterans currently using community hospitals, the financial viability of community hospitals, particularly those in the vicinity of Travis Air Force Base, might be adversely affected. 
VA’s position that it needs to build 170 more hospital beds at Travis is based on the assumption that veterans will demand hospital care in 2005 at the same rate they did between 1989 and 1991. This assumption appears flawed given the changing health care delivery market. Because the data used in VA’s integrated planning model are several years old, the model does not fully reflect the decrease in hospital utilization occurring because of changes in medical practice and medical technology. For example, a few years ago, it was common practice for patients to remain in the hospital for 1 to 2 weeks following surgery. Now, however, it is common medical practice to get patients out of bed the day of or day after major surgery and to discharge them within a few days. In addition, new techniques, such as less invasive laparoscopic surgery, help shorten lengths of stay for those patients requiring hospital admission. Similarly, advances in medical technology and techniques, such as laser surgery, permit many procedures to be safely performed on an outpatient basis. Moreover, in the past few years, VA has made major strides toward shifting care to outpatient settings. For example, the performance expectations that the under secretary for health set for VISN directors establish goals for increasing both the percentage of surgeries performed on an outpatient basis and the percentage of hospital admissions shifted to outpatient settings. The NCHCS clinics served more veterans in fiscal 1995 than they did in 1992, and fewer veterans were admitted to hospitals in 1995 than in 1990, the last full year that the Martinez hospital was open. This reduced usage seems consistent with VA’s shifting of care from inpatient to outpatient settings. With the establishment of the recently constructed Martinez outpatient clinic, NCHCS became a model for the rest of the VA system. 
The Martinez clinic offers modern ambulatory surgery and sophisticated imaging technology, permitting much care to be delivered on an outpatient basis. The bed days of care provided to veterans served by the Martinez clinic are among the lowest in the VA system, according to the VISN director. The ambulatory surgery and imaging capabilities at Martinez also help reduce hospital admissions from other VA clinics. For example, the Sacramento and Oakland clinics refer some patients to Martinez for ambulatory surgery rather than admitting them to a hospital. As the Oakland, Sacramento, and Redding clinics’ ability to perform outpatient surgery is expanded, further reductions in hospital admissions might well result. VA is also moving towards nonhospital settings for patients who need subacute care. In 1991, VA provided a considerable amount of such care in its hospitals, and the 1992 plans for the proposed Travis project, for example, included 56 nonacute beds. The NCHCS clinics at Oakland, Martinez, and Sacramento—the primary clinics likely to generate admissions to the VA hospital at Travis—currently serve all veterans seeking outpatient care and place all veterans requiring hospital care in a VA or community bed. However, network and NCHCS officials told us, and we observed during our visits, that these clinics operate inefficiently, in part, because of space constraints, such as the lack of sufficient numbers of examining rooms. The fourth NCHCS clinic, in Redding, does not currently meet the needs of all veterans seeking care. The Redding clinic, which provides only primary care, evaluates all veterans seeking care but, according to the chief medical officer, does not serve higher-income veterans in the discretionary care category for hospital care or veterans who have no service-connected disabilities and do not receive a VA pension. According to the chief medical officer, the clinic was built to support 15,000 visits a year but provided 33,000 visits last year. 
A new, larger clinic is scheduled to open in November. In 1995, the four NCHCS clinics served over 33,000 veterans, providing a total of 338,000 outpatient visits. Veterans served by the four clinics were admitted to hospitals about 2,800 times, primarily for general medicine services, but also for surgical, neurological, and psychiatric services. This admission rate, about 85 admissions per 1,000 veterans served, supported an average daily census of about 75 hospital beds, or about 2 beds per 1,000 veterans served. VA’s proposal to build 170 new beds at Travis and obtain 18 additional beds in the existing Air Force hospital would more than quadruple VA’s current capacity of 55 beds. Because the hospital care needs of all current VA users are being met through use of existing VA and community beds, VA would need to attract significant numbers of new users to its health care system, or shift current hospital users to the Travis hospital, to justify the cost of the proposed additional beds. Given the limited potential to shift current hospital users from other VA hospitals and community hospitals to an expanded Travis project, VA would need to more than triple its market share of veterans living in the NCHCS service area. NCHCS clinics refer patients to any VA hospital in the Sierra Pacific Network but emphasize referrals to the Travis hospital. Clinic directors told us that referral decisions are based on where veterans live, the type of care they need, the urgency of their condition, the availability of beds, and where veterans would prefer to obtain care. NCHCS’ summary admission statistics show that, in fiscal year 1995, 52 percent of the admissions were to VA’s Travis hospital. Another 25 percent were sent to community hospitals. The remaining 23 percent went to other VA hospitals, primarily Palo Alto and San Francisco. 
The potential to fill additional beds at Travis by reducing the use of community hospitals appears limited because admissions to community hospitals are generally for treatment of emergent conditions—conditions requiring emergency care. Because patients with emergent conditions are not stable and require immediate hospitalization, they are transported by ambulance to the nearest hospital capable of providing the needed services. Because of the distance from Sacramento, Oakland, Redding, and Martinez to Travis Air Force Base, patients needing emergency care generally would not be transported to Travis even if more beds were available there. Such patients would continue to obtain care in community hospitals. If VA had additional beds at Travis Air Force Base, some of the veterans currently using the Palo Alto and San Francisco hospitals might be shifted to the Travis hospital. However, according to NCHCS clinic officials, many of the veterans referred to the Palo Alto and San Francisco hospitals were referred there because the veterans either lived closer to one of those facilities or needed specialized care not available at Travis. To effectively use the additional beds it is seeking to construct and obtain through transfer from the Air Force, VA would need to more than triple—from 33,000 to over 112,000—the number of veterans in the service area who use VA health care services. In fiscal year 1995, the four existing clinics treated about 33,000 veterans, supporting about two hospital beds for every 1,000 veterans using VA services. Assuming an 85-percent occupancy rate in the proposed hospital, VA would need to attract about 72,250 new users to maintain an average daily census of 145 in the 170 additional beds it is seeking to construct and about 7,650 new users to maintain an average daily census of 15 in the additional 18 beds the Air Force plans to transfer to VA. 
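The new-user arithmetic above reduces to a simple ratio. A minimal back-of-the-envelope sketch follows: the 85-percent occupancy assumption and the ratio of about 2 staffed beds per 1,000 users are the report's figures, while the function and variable names are ours.

```python
def new_users_needed(new_beds, occupancy=0.85, beds_per_1000_users=2.0):
    """Users VA must attract to fill new beds, given the report's observed
    ratio of roughly 2 staffed hospital beds per 1,000 VA users."""
    average_daily_census = new_beds * occupancy
    return average_daily_census / beds_per_1000_users * 1000

print(round(new_users_needed(170)))  # 72250 new users for the 170 constructed beds
print(round(new_users_needed(18)))   # 7650 new users for the 18 transferred beds
```

The same ratio (75 average daily census among 33,000 users, rounded to 2 beds per 1,000) reproduces the report's 72,250 and 7,650 figures exactly.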
Utilization data from other VA medical centers support our estimate that VA would need to more than triple its market share of veterans living in the service area to support the proposed beds at Travis. The approximately 33,000 users in the service area were hospitalized a total of about 2,800 times during fiscal year 1995, maintaining an average daily census of 75 hospital patients. To maintain an average daily census of 145 (85-percent occupancy) in the new beds, VA would need to provide hospital care to about 6,500 additional patients each year, experience from existing medical centers suggests. For example, the Charleston, South Carolina, VA medical center, which had an average daily census of 145 in fiscal year 1995, treated about 5,923 patients. Similarly, the Iowa City, Iowa, VA medical center, with an average daily census of 142, treated 6,526 patients. Over 3,300 excess hospital beds exist in and near the areas that would be served by the proposed Travis project. First, veterans’ use of VA acute care beds in Palo Alto and San Francisco has declined by about 180 beds over the past 3 years, adding to excess acute care capacity. The medical center director from San Francisco indicated that the facility could accommodate at least 80 additional acute care patients per day. Similarly, the Palo Alto medical center director estimated that the new acute care hospital nearing completion there will have about 100 unused beds when it opens. Although these hospitals are not convenient for veterans in Sacramento and other areas north of Travis, for veterans living in Oakland and some other parts of the East Bay, the hospitals are closer than Travis. Second, the Air Force has unused beds at Travis that could potentially be used for VA inpatient care. For example, over 40 beds have been converted to office space. 
Third, significant excess hospital capacity exists in community hospitals in northern California, including the Sacramento, Martinez, Oakland, Redding, and Fairfield areas. For example, community hospitals in the counties where the VA facilities are located had average occupancy rates in 1995 ranging from about 40 percent (Solano County) to about 68 percent (Sacramento County). Overall, an average of 3,158 unused community hospital beds existed in the five counties on any given day (see table 2). Declining numbers of veterans are likely to lead to continuing declines in demand for VA hospital care. In fiscal year 1995, an estimated 412,000 veterans lived in the area that would be served by the proposed Travis project. By 2010, VA estimates that the veteran population will have decreased by 25 percent. Figure 3 shows the expected decrease in veterans’ population in the Travis service area. Veterans’ use of other VA hospitals in northern California is also expected to continue declining, due in large part to the decreasing veteran population. VA’s 1994 Integrated Planning Model estimates that veterans will use a total of 294 fewer beds at the Palo Alto and San Francisco hospitals between 1995 and 2010. The proposed Travis project would likely have a significant economic effect on other hospitals, particularly those in the Travis and Sacramento areas. As previously discussed, VA would need to generate about 6,500 additional hospital admissions in order to fill the new beds planned at Travis. The additional admissions would most likely come primarily from the Fairfield and Sacramento areas, because Oakland and Martinez are closer to VA hospitals in Palo Alto and San Francisco. As discussed above, community hospitals in the Fairfield area have occupancy rates of around 40 percent and those in the Sacramento area, about 68 percent. 
Similarly, to the extent referral patterns for the Oakland and Martinez clinics would be changed to encourage shifting patients from Palo Alto and San Francisco to newly expanded beds at Travis Air Force Base, excess capacity would be increased at Palo Alto and San Francisco. The number of veterans traditionally targeted by VA—primarily veterans with low incomes or service-connected disabilities—living near the Travis Air Force Base does not appear to be large enough to support an outpatient clinic as large as the one planned. The Travis area is less densely populated than areas where other VA clinics are located. Thus, to meet workload projections, the clinic would have to serve large numbers of higher-income veterans with no service-connected disabilities or attract veterans away from existing VA clinics. Existing VA clinics in Sacramento, Martinez, and Oakland generally draw veterans from one of two distinct markets: the Sacramento and East Bay areas. The proposed Travis outpatient clinic, which would be as large as VA’s Sacramento clinic and larger than the Oakland clinic, would serve primarily the area around Solano County. Solano County has fewer veterans than the counties where the existing clinics are located (see table 3). Although the Sacramento, Martinez, and Oakland clinics are crowded, they turn away no veterans seeking care, including higher-income veterans with no service-connected disabilities. The clinics reported that most of the veterans they serve are in the mandatory care category and have service-connected disabilities or low incomes. Moreover, with more clinic space, it would be possible to serve even more veterans. In effect, VA is planning to develop a new outpatient market in the area surrounding the Travis clinic. This market would comprise veterans residing in the northeastern part of the East Bay area and the southwestern part of the Sacramento area. VA’s facilities operate in a competitive market in northern California. 
According to public and private health care experts, convenience is an important factor in California residents’ choices of health care providers. Several NCHCS officials said that veterans could not reasonably be expected to travel more than 25 to 35 miles for care. Similarly, our review of VA patients using outpatient services in fiscal year 1993 showed that most VA clinic users live close to the clinic. Living within 5 miles of a VA clinic significantly increases the likelihood that a veteran will use VA health care services; nationwide, about 26 percent of veterans using VA outpatient services lived within 5 miles of a VA clinic, although only about 17 percent of all veterans lived that close. Moreover, about 68 percent of VA outpatient users lived within 25 miles of a VA clinic, and almost all lived within 100 miles. Accordingly, the proposed Travis outpatient clinic should draw users primarily from veterans living close to Fairfield. Because Travis is within 44 miles of the existing clinics at Martinez, Oakland, and Sacramento, however, the primary service area for the Travis clinic would actually be smaller. Veterans who live within 44 miles of both Travis and either Martinez, Oakland, or Sacramento would likely use the closest facility. Figure 4 shows the service area from which the proposed Travis clinic and existing Martinez, Oakland, and Sacramento clinics could expect to attract most of their users. The primary Travis service area does not appear to have enough veterans to support about 85,000 outpatient visits a year. The clinic would have to attract about 19 percent of all veterans living in the primary service area, compared with an average market share of 13 percent for other clinics. During fiscal year 1995, about 1,900 veterans living in the Travis primary service area used VA outpatient clinics. Many such veterans would probably begin using the Travis clinic because of added convenience. 
But even if all VA users who reside in the Travis primary service area shifted to the Travis clinic, the number would be too small to efficiently support the Travis project. Establishing a clinic at Travis could attract a number of veterans who had not previously used VA health care services. To support 85,000 visits, however, the clinic would need to attract about 12,000 users (based on 11,672 users who generated 83,151 visits at the Sacramento clinic). The veterans most likely to use VA health care services are those with low incomes or service-connected disabilities. Although the Travis clinic is being designed to provide roughly the same number of visits as the Sacramento clinic and more than the Oakland clinic, the number of veterans with low incomes in the proposed Travis service area is smaller. Over 37,800 veterans with incomes of less than $25,000 live in Sacramento County, 11,672 of whom used the Sacramento clinic in fiscal year 1995. Similarly, almost 31,800 veterans with incomes of less than $25,000 live in Alameda County, 6,457 of whom used the Oakland clinic that same year. The Travis clinic area has only about 10,300 veterans with incomes under $25,000 from whom to attract the estimated 12,000 users. Veterans with service-connected disabilities are the other main category of VA users. Because the overall veteran population in Solano County is roughly one-third of the veteran population in either Oakland or Sacramento, the Travis clinic will likely have fewer service-connected veterans from whom to attract its users. We could not readily obtain data on the number of veterans in each county who have service-connected disabilities. Nationwide, however, about 2.2 million of the 26.2 million veterans (8.4 percent) have compensable service-connected disabilities. 
If Solano County is representative of the distribution of veterans with service-connected disabilities nationwide, then about 3,600 of the approximately 43,000 veterans living in Solano County have service-connected disabilities. In contrast, an estimated 10,400 veterans with service-connected disabilities live in Sacramento County. If demand for VA hospital care increases, several alternatives are available that do not entail constructing additional beds at Travis. These options include converting the Air Force’s Mather hospital to VA use, expanding VA use of space at Travis Air Force Base, making greater use of excess capacity in existing VA hospitals, and expanding use of community hospitals. The Sierra Pacific Network is currently assessing the best way to deliver health care to veterans. The Congress’ decision on whether to fund the Travis hospital has significant implications for this planning effort. Between 1988 and 1995, the Defense Base Closure and Realignment Commission recommended closing several DOD hospitals in northern California, including Letterman Army Medical Center in San Francisco, the Naval Hospital in Oakland, and the Air Force’s Mather hospital near Sacramento. The Air Force currently operates a 105-bed hospital on the grounds of the former Mather Air Force Base, which is about 11 miles southeast of Sacramento. While physically located at Mather, the hospital is currently part of McClellan Air Force Base. DOD plans to close the Mather hospital by 2001. The planned closure provides VA the opportunity to acquire a fully functional hospital and outpatient clinic at a fraction of the cost of new construction at Travis. In addition, the facility is closer to the larger Sacramento-area veteran population and would alleviate the crowding at the existing Sacramento outpatient clinic. Because Mather is a small hospital, however, operating costs per patient treated may be high. 
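The two proportional estimates in the preceding paragraphs (the users needed to support 85,000 annual visits, and the likely number of service-connected veterans in Solano County) can be reproduced from figures cited in the text. A minimal sketch; the variable names are ours.

```python
# Users needed to support the planned 85,000 annual visits, using the
# Sacramento clinic's rate: 11,672 users generated 83,151 visits.
visits_per_user = 83_151 / 11_672               # about 7.1 visits per user
users_needed = 85_000 / visits_per_user
print(round(users_needed))                      # about 12,000, the report's figure

# Service-connected veterans in Solano County, assuming the county mirrors
# the national rate (2.2 million of 26.2 million veterans, 8.4 percent).
national_rate = 2.2e6 / 26.2e6
solano_estimate = 43_000 * national_rate
print(round(solano_estimate, -2))               # about 3,600
```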
VA has developed two primary options for potential use of the Mather hospital building: convert the building into an outpatient clinic and 91-bed hospital and convert the building into an outpatient clinic and use the second floor as an ambulatory surgery center. VA’s existing Sacramento outpatient clinic is overcrowded, and plans have been developed to build a larger facility. Building a replacement outpatient clinic in Sacramento that would provide 87,000 outpatient visits per year is estimated to cost about $32 million (excluding the cost of land). VA officials believe that using the existing Mather clinic may be a cost-effective alternative to new construction. NCHCS studied the Mather hospital to determine how to renovate the facility for use as a 91-bed VA hospital, outpatient clinic, and outpatient surgery center. Renovation would be required primarily to improve patient privacy, improve accessibility for the handicapped, and make safety and seismic improvements. VA officials said that the inpatient wards at the Mather hospital could be reconfigured into a 91-bed hospital that meets the handicapped-access needs of the veteran population. VA officials, working with an architectural design firm, estimate that the total cost of converting the Mather hospital into a fully functional hospital and outpatient clinic would be about $28 million. In addition, there would be start-up costs of about $11 million and increased annual operating costs of about $14 million. In addition to the hospital building, VA has developed plans for using several of the adjacent buildings. For example, one building would be used to house a mental health clinic—the current clinic is located in a strip mall across the street from the existing Sacramento clinic. In addition, VA officials indicated that they might be able to use a new warehouse at the Mather site to serve all VA hospitals and clinics in the Sierra Pacific Network. 
A potential drawback to using the Mather hospital as an inpatient facility is the cost of operating a small hospital. With small hospitals, the number of staff frequently exceeds the number of patients. A 91-bed hospital could be expected to have an average daily census of no more than 78 patients. The VISN director told us that, in the private sector, 150 beds is generally considered the break-even point for operating a hospital. Another VA official said that it is difficult to attract physicians to a small hospital because of the limited range of patients and services provided. The dean of the University of California at Davis medical school, however, did not see the size of the facility as a problem. Because of its proximity to UC-Davis, a hospital at Mather would, he said, be able to draw physicians and residents from the medical school. He said that Mather could be used for more routine hospitalizations and that specialized care could be provided at the University of California at Davis Hospital. NCHCS’ facility planner said that converting the Mather facility to only an outpatient clinic and ambulatory surgery center would cost about the same as converting the facility into a VA hospital, but annual operating costs would be less. Using the Mather hospital as a clinic would relieve crowding at the existing Sacramento clinic. In November 1995, VA sent a letter to the Air Force Base Conversion Agency expressing an interest in acquiring, at no cost, the Mather facility and a separate dental clinic located at McClellan Air Force Base. The proposal was endorsed by the Sacramento County Board of Supervisors and, on May 30, 1996, VA notified the Secretary of the Air Force of its intention to acquire the hospital. VA had concluded that acquiring the hospital, including required modifications, would be the most cost-effective alternative to building a new VA outpatient clinic in Sacramento. 
The Air Force has already received an appropriation of $10 million for fire, safety, and seismic improvements to the building. The Air Force informed VA that it will proceed with the improvements if it gets assurance from VA that the facility will be used as a hospital. As of August 1996, VA had not provided the Air Force assurance that the facility would be used as a hospital. The Air Force appears to have additional beds that could potentially be made available for VA use if the need arises. About 40 beds have been converted to office and other space. Moreover, further integration of Air Force and VA patient care services could provide VA access to additional beds. According to VA officials, one significant drawback to VA’s use of DGMC is the lack of office space for VA physicians. Typically, physicians spend only a portion of their day with hospitalized patients, using the rest of their day to see patients on an outpatient basis, complete paperwork, or conduct research. Because VA does not have an outpatient clinic at Travis and no physicians’ offices are available to VA, physicians’ options are limited. Both VA and Air Force officials agreed that it is less costly to build office space than hospital space. While VA has no immediate need for additional beds, if either VA or DOD demand for inpatient care increases in the future, additional office space could be built, and some or all of the space currently used for administrative purposes could be returned to patient care. Similarly, additional inpatient beds might be made available if some of the 75 beds in the aeromedical staging facility could be used to support the ambulatory surgery program. Both the Palo Alto and San Francisco VA medical centers have significant excess capacity that could be used to serve veterans, especially those from the East Bay. Some of these veterans live closer to Palo Alto or San Francisco than they do to Travis Air Force Base. 
The chief medical officer from the Oakland clinic said that the Palo Alto and San Francisco hospitals had approached him about referring more patients. The main hospital at Palo Alto was severely damaged in an earthquake, and a replacement acute care facility is under construction. The replacement hospital, scheduled to open in 1997, will be virtually a bed-for-bed replacement for the bed tower damaged in the earthquake. It will include 228 medical/surgical beds, including 24 intensive care unit beds. Because of changes in medical practice, the medical center director estimates that the hospital will have about 100 excess medical and surgical beds when it opens. Moreover, the Menlo Park division of Palo Alto also has a number of empty beds. The division, which includes 118 psychiatric beds and a 100-bed drug and alcohol abuse unit, plans to reduce its operating beds by 50 percent. As a result, Palo Alto and its Menlo Park division will have sufficient excess capacity to accommodate the additional 60 psychiatric beds planned at Travis. The VA medical center at San Francisco also has excess capacity. The San Francisco medical center is authorized 240 beds and is currently staffed to operate 190 beds, with an average daily census of about 160 patients, including many who may require only subacute hospital or extended care. The medical center director said that the hospital has about 80 excess beds now and will likely have more in the future because the hospital’s workload has been steadily declining; the base closures in the Bay area have slowed the rate of decline, however, as military retirees with dual eligibility have sought care from VA after closure of DOD hospitals. Further, the San Francisco hospital is more convenient for some East Bay veterans because it is closer than Travis Air Force Base, is served by public transportation, or both. Thousands of unused beds are available in community hospitals in northern California. 
In the approximately 4 years since VA decided to build a replacement hospital at Travis Air Force Base, significant changes in the availability of beds in community hospitals have occurred. For example, a major hospital in the Martinez area—Merrithew—expressed interest in selling its excess capacity to VA. Another hospital in Martinez—Kaiser Permanente—plans to close. Similarly, four hospital systems based in Sacramento—Catholic, UC-Davis, Kaiser Permanente, and Sutter—have alliances with hospitals covering a wide geographic area in northern California. An alliance with one of these hospital systems might bring hospital care closer to veterans’ homes than would construction of a VA hospital at Travis. The potential for such an alliance is one of the alternatives being explored by the network. Although VA currently makes extensive use of contract hospitals to provide emergency services, it lacks authority to contract for routine hospital care for most veterans. VA has specific statutory authority (38 U.S.C. 1703) to contract for medical care when its facilities cannot provide necessary services because they are geographically inaccessible. VA also has authority (38 U.S.C. 8153) to enter into agreements “for the mutual use, or exchange of use, of specialized medical resources when such an agreement will obviate the need for a similar resource to be provided” in a VA facility. Specialized medical resources are equipment, space, or personnel that—because of their cost, limited availability, or unusual nature—are unique in the medical community. Neither statute authorizes VA to routinely provide hospital care through contracts with community facilities. As a result, VA cannot currently rely exclusively on contracting to meet any unexpected growth in the needs of veterans in the service area. VA is seeking to expand its legislative authority to contract for hospital and other health care services. 
Language that would expand its contracting authority was included in veterans’ health care eligibility reform legislation (H.R. 3118) passed by the House of Representatives on July 30, 1996. If enacted, contracting reforms would give VA considerable flexibility to contract with community hospitals. A number of basic contracting approaches could be used to obtain beds from community hospitals. First, VA could lease excess space in a community hospital and staff and operate its own beds, sharing certain services with the hospital. Second, VA could contract with a hospital to operate a set number of beds for veterans. Such contracts, however, involve certain risks because of the unknown demand for care. In other words, if VA overestimates demand, then its costs of providing care through contracting would increase. The third method of providing care through contracting would be to purchase care “on the margin,” paying for each hospital episode separately, as VA does now for emergency services. In May 1993, we testified before the Senate Committee on Veterans’ Affairs on the effects of changes in the health care system on VA’s major construction program. At that time, we suggested that VA consider seeking authority to use demonstration projects to test the feasibility of and best methods for contracting with community hospitals as an alternative to building VA hospitals that might never be used. One of the areas proposed for consideration as a demonstration site was the northern California area served by the former Martinez hospital. VA is developing plans for restructuring the way health care services are provided in the Sierra Pacific Network. The Congress’ decision on whether to fund the proposed Travis project has significant implications for the study. In 1995, VA provided its network directors proposed criteria to help identify opportunities for efficiencies. 
For example, the criteria suggest that directors use community providers (subject to current restrictions under the VA law) if the same kind of services of equal or higher quality are available at either lower cost or equal cost but in more convenient locations for patients. The criteria also encourage directors to use nearby VA facilities and to integrate or consolidate services if doing so would yield significant administrative or staff efficiencies. In addition, the Sierra Pacific Network director has established a task team consisting of facility directors to study the best way to deliver care in the network. The goal is to develop a short-term strategic plan (1 to 2 years) and a longer term strategic plan (3 to 5 years). These plans are to be completed in the fall of 1996, although the Sierra Pacific Network director said that final plans on how to best deliver care will not be complete until spring of 1997. The task team is studying current use rates for each facility in the network, the types of services available at each facility, where the patients live, and the cost and availability of services in the community. This study will likely address such potential service delivery alternatives as integrating hospitals; establishing new clinics; purchasing care through community providers; using the soon-to-be-closed Mather hospital for inpatient care, outpatient care, or both; and expanding the joint venture at Travis Air Force Base. It will be difficult for the network to recommend changes in facility missions, contracting with community providers, and hospital referral patterns until the Congress completes its deliberations on (1) funding the Travis project and (2) reforming VA health care contracting. 
VA’s plans to establish a 243-bed medical center at Travis Air Force Base—which include construction of 170 new hospital beds, renovation and expansion of existing Air Force support areas, and construction of an 85,000-visit outpatient clinic—are not justified on the basis of the current and expected workload and the availability of lower-cost alternatives. First, VA is meeting veterans’ needs with existing facilities. NCHCS clinics in Sacramento, Oakland, and Martinez, while crowded and operating at less than full efficiency, are meeting inpatient and outpatient needs and turning away no veterans. Second, the decision to build at Travis was driven by VA’s 1992 assessment of veterans’ health care needs in northern California, which relied on assumptions concerning the future availability and use of hospital beds that are no longer valid. To support the number of beds VA plans to build at Travis, VA would need to more than triple the number of veterans now served there under the joint venture with the Air Force. VA’s ability to attract such a large supply of new users is questionable, however, given the large supply of unused hospital beds in VA, Air Force, and private hospitals; the decreasing veteran population; and the shifting of medical care from inpatient to outpatient settings. Such uncertainties subject VA to the risk of spending federal dollars to build a hospital that will have a large supply of beds that may never be used. Third, alternatives to the construction project could meet any increase in demand for hospital care without incurring the risk of spending hundreds of millions of dollars to build and operate hospital beds that are unlikely to ever be used. VA has many more efficient alternatives to serve northern California veterans. For example, it might be able to obtain use of additional beds from the Air Force at DGMC or to obtain the Mather hospital from the Air Force when McClellan Air Force Base is closed. 
Similarly, it could change hospital referral patterns for its existing clinics, especially the Oakland and Martinez clinics, to send more hospital patients to Palo Alto and San Francisco to take advantage of existing excess capacity. Finally, if VA had the legislative authority, it could expand contracting with community hospitals in order to provide veterans access to hospital care closer to their homes and at the same time strengthen the financial viability of community hospitals, especially those operating at less than 50-percent occupancy. Pursuing such alternatives before spending hundreds of millions of dollars to build and operate a new VA hospital appears consistent with VA’s new network planning strategy in that it would help maintain the viability of existing VA hospitals. Without such planning, the existing VA hospitals’ viability may be jeopardized by declining workloads associated with shifting veterans to the new Travis hospital. Although construction of outpatient facilities at Travis Air Force Base appears justified to support the existing VA beds, there do not appear to be enough veterans in the primary area to be served by the clinic to support a clinic of the size that VA plans. In addition, if VA obtains and converts the Mather hospital into a clinic and ambulatory surgery center, or constructs a new outpatient clinic in Sacramento, the ability of the Travis clinic to attract veterans from the Sacramento area would likely be diminished. The clinic needs of veterans in the Travis area could be met with less clinic space than VA included in the proposed Travis project, and VA could build the smaller clinic with the flexibility to expand if necessary. We recommend that the Congress deny VA’s request for funds to construct additional hospital beds at Travis Air Force Base, given the availability of cost-effective alternatives to meet the health care needs of veterans in the NCHCS. 
The Congress may also wish to consider directing VA to spend only part of existing appropriated funds to construct a smaller outpatient clinic designed to provide considerably fewer than 85,000 visits a year. Moreover, the Congress could direct VA to delay expenditure of the remaining appropriated funds for the Travis facility until VA’s ongoing network study is completed. VA’s study provides the opportunity to identify lower-cost alternatives to meet veterans’ needs, including outpatient clinic improvements for veterans living in Oakland or acquisition and renovation of the Mather hospital for VA use as an inpatient or outpatient facility. VA’s study could also determine the highest-priority needs and, if necessary, justify congressional approval to spend all or a portion of the existing appropriations to meet any higher-priority needs identified through the study. Because VA does not currently have legislative authority to contract for routine hospital care, it cannot take full advantage of the excess hospital capacity in northern California to meet the hospital care needs of veterans closer to where they live. Therefore, if proposed legislation to expand VA’s contracting authority is not enacted, the Congress may want to consider authorizing VA to conduct a demonstration project in northern California to assess the benefits and costs of VA’s purchasing care for veterans with urgent and nonemergent conditions from community providers. We requested comments on a draft of this report from the Department of Veterans Affairs, but none were received in time to be included in the report. 
We are sending copies of this report to the Speaker of the House; the President of the Senate; the Ranking Minority Member of the Subcommittee on VA, HUD, and Independent Agencies, Senate Committee on Appropriations; the Chairman and Ranking Minority Member of the Subcommittee on VA, HUD, and Independent Agencies, House Committee on Appropriations; the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations; and the Chairmen and Ranking Minority Members of the Senate and House Committees on Veterans’ Affairs. Copies of the report are also being provided to Members of congressional delegations from the affected portions of northern California. We are also sending copies to the Secretaries of Veterans Affairs, Defense, and the Air Force; the Director, Office of Management and Budget; and other interested parties. Copies will be made available to others upon request. This report was prepared under the direction of David P. Baine, Director, Veterans’ Affairs and Military Health Care Issues, who can be reached at (202) 512-7101. You may also call Mr. Paul Reynolds at (202) 512-7109 or Mr. James Linz at (202) 512-7110 if you or your staff have questions concerning this report. Other evaluators who made contributions to this report include Byron Galloway, Deena El-Attar, Joan Vogel, John Borrelli, John Kirstein, Paul Wright, and Ann McDermott. Janet L. Shikles, Assistant Comptroller General, Health, Education, and Human Services Division
Pursuant to a congressional request, GAO provided information on the Department of Veterans Affairs' (VA) planned construction of an outpatient clinic and additional bed space at the David Grant Medical Center at Travis Air Force Base, focusing on whether: (1) the project could be adequately justified; and (2) there are cost-effective alternatives to planned hospital construction. GAO found that: (1) construction of additional hospital beds and an outpatient clinic as large as VA proposes at Travis Air Force Base is unnecessary; (2) significant changes have occurred in the health care marketplace and in the way VA delivers health care in the 4 years since the project was planned, but VA plans have not been revised accordingly; (3) these changes alone have resulted in over 3,300 unused hospital beds in northern California hospitals, including beds in VA, Air Force, and community hospitals; (4) in addition, the veteran population in the service area is expected to drop by about 25 percent between 1995 and 2010; (5) VA has not considered the likely negative effects the additional beds could have on other hospitals in northern California, particularly those community hospitals in the Solano County area surrounding Travis Air Force Base that have occupancy rates of around 40 percent; (6) data GAO obtained show that VA is currently meeting the health care needs of veterans served by the Northern California Health Care System; (7) with VA hospitals at Palo Alto, San Francisco, and Travis operating below capacity, VA clinics have no trouble placing patients needing hospital care; (8) also, while VA's four clinics in the area intended to be served by the Travis hospital are operating at close to full capacity, three have turned away no veterans needing hospital or outpatient care; (9) in addition, the clinics have effectively used community hospitals for medical emergencies; (10) VA officials pointed out, and GAO's visits confirmed, that space constraints, such as the lack of sufficient numbers of examining rooms, prevent them from operating as efficiently as they could otherwise; (11) GAO identified several more efficient alternatives that are available to VA if increased demand for hospital care should materialize; (12) VA officials in the Sierra Pacific Network are currently studying the best way to meet veterans' future health care needs; and (13) network officials are considering options to make better use of VA facilities and increase the use of private and other public facilities.
The financial services industry is a major source of employment in the United States. EEOC data we obtained and analyzed showed that financial services firms we reviewed for this work employed more than 2.9 million people in 2011. We defined the financial services industry to include the following sectors: depository credit institutions, which include commercial banks, thrifts (savings and loan associations and savings banks), and credit unions; holdings and trusts, which include investment trusts, investment companies, and holding companies; nondepository credit institutions, which extend credit in the form of loans and include federally sponsored credit agencies, personal credit institutions, and mortgage bankers and brokers; the securities sector, which is composed of a variety of firms and organizations that bring together buyers and sellers of securities and commodities, manage investments, and offer financial advice; and the insurance sector, including carriers and insurance agents that provide protection against financial risks to policyholders in exchange for the payment of premiums. We previously conducted work on the challenges faced in the financial sector for promoting and retaining a diverse workforce, focusing on private-sector firms. In 2010, we reported that overall diversity at the management level in the financial services industry did not change substantially from 1993 through 2008 and that diversity in senior positions was limited. We also found that without a sustained commitment among financial services firms to overcoming challenges to recruiting and retaining minority candidates and obtaining “buy-in” from key employees, limited progress would be possible in fostering a more diverse workplace. 
In a 2005 report, we defined diversity management as a process intended to create and maintain a positive work environment that values individuals’ similarities and differences, so that all can reach their potential and maximize their contributions to an organization’s strategic goals and objectives. We also identified a set of nine leading diversity management practices that should be considered when an organization is developing and implementing diversity management. They are (1) commitment to diversity as demonstrated and communicated by an organization’s top leadership; (2) the inclusion of diversity management in an organization’s strategic plan; (3) diversity linked to performance, making the case that a more diverse and inclusive work environment could help improve productivity and individual and organizational performance; (4) measurement of the impact of various aspects of a diversity program; (5) management accountability for the progress of diversity initiatives; (6) succession planning; (7) recruitment; (8) employee involvement in an organization’s diversity management; and (9) training for management and staff about diversity management. Section 342 of the Dodd-Frank Act required specific federal financial agencies and Reserve Banks each to establish, by January 21, 2011, an OMWI, responsible for matters relating to diversity in management, employment, and business activities. Diversity has remained about the same at the management level in terms of the representation of both minorities and women, while industry representatives noted the continued use of leading diversity practices and some challenges. According to EEOC data, the representation of minorities at the management level stood at 19 percent in 2011. The representation of women in management remained at about 45 percent, according to EEOC data. 
The nine leading diversity practices that we previously identified in 2005 remain relevant today, according to industry representatives with whom we spoke. Industry representatives also noted some challenges, such as the difficulty in recruitment because of a limited supply of diverse candidates. At the overall management level, the representation of minorities increased from 17.3 percent to 19 percent from 2007 through 2011, according to EEOC data we obtained, which are reported by financial services firms (see fig. 1). While this is not a substantial increase, it shows a continued upward trend from our 2006 report, in which data showed that management-level representation by minorities increased from 11.1 percent to 15.5 percent from 1993 through 2004. The representation of minorities varied among management positions, which EEOC splits into two categories: (1) first- and mid-level officials and managers and (2) senior-level officials and managers. In 2011, the representation of minorities among first- and mid-level managers stood at 20.4 percent, about 1 percentage point higher than the representation of minorities among all management positions, according to EEOC data (see fig. 2). In contrast, at the senior management level, representation of minorities was 10.8 percent in 2011, about 8 percentage points below their representation among all management positions. Representation of minorities in first- and mid-level management positions, however, consistently increased from 18.7 percent to 20.4 percent over the 5-year period. First- and mid-level management positions may serve as an internal pipeline in an organization through which minority candidates could move into senior management positions. Similar to the total representation of minorities across all management positions, the representation of specific races/ethnicities has not changed significantly, though EEOC data show slight variations. 
For example, the representation of African Americans decreased from 6.5 percent in 2007 to 6.3 percent in 2011, according to EEOC data (see fig. 3). In contrast, representation of most other races/ethnicities increased, and the largest increase was in the representation of Asians, from 5.4 percent to 6.5 percent over the same time period. From 2007 to 2011, the representation of African Americans declined at both management levels, while the representation of other specific races/ethnicities either increased or remained stable (see fig. 4). At the senior management level, the representation of Asians remained stable at about 4.1 percent from 2007 through 2011. However, the representation of African Americans in senior management positions decreased from 3.1 percent to 2.7 percent, and the representation of Hispanics increased from 3 percent to 3.3 percent. Among first- and mid-level management positions, the representation of Asians increased from 5.6 percent to 6.9 percent and the representation of Hispanics increased from 5.2 percent to 5.5 percent, while the representation of African Americans decreased from 7.2 percent to 6.9 percent. Over the same 5-year period, the representation of women at the management level remained at about 45 percent in EEOC data, which show a slight decrease from 45.1 percent to 44.7 percent (see fig. 5). In 2006, we reported an increase in the representation of women, from about 42.9 percent in 1993 to about 45.8 percent in 2004. Among all women in management positions, EEOC data showed that the representation of minority women increased from 20.4 percent to 22 percent over the same 5-year period (see fig. 6). In addition, EEOC data show that the representation of minority men increased from 14.8 percent to 16.6 percent. Among first- and mid-level management positions, the representation of women has been at about 48 percent, slightly higher than the representation of women among all management positions. 
In contrast, women represented about 29 percent of all senior management positions from 2007 through 2011—about 16 percentage points below the representation of women for all management positions, according to EEOC data (see fig. 7). Based on EEOC data, minority women had greater representation at the first and mid levels of management compared to the senior level over the 5-year period. As shown in figure 7, among female senior managers, representation of minority women remained at about 13 percent over the 5-year period. In contrast, among female first- and mid-level managers, the proportion of minority women increased during the same period from 21.2 percent to 22.9 percent. The representation of minorities increases for both women and men as the firm size increases (see fig. 8). For example, in 2011 the representation of minorities at firms with 100-249 employees was about 18 percent among women and about 12 percent among men, while at firms with more than 1,000 employees, the representation of minorities was about 23 percent among women and about 17 percent among men. For additional analysis of EEOC data by workforce position and industry sector, see appendix II. A survey of the general population shows some similar trends in the representation of both women and minorities in the financial services industry. The CPS is administered by the Bureau of the Census for the Bureau of Labor Statistics and is a monthly survey of about 60,000 households across the nation. The CPS is used to produce official government figures on total employment and unemployment issued each month. According to the CPS data, from 2007 through 2011 the representation of women at the management level decreased from an estimated 49.1 percent to 47.3 percent. In addition, CPS data show a smaller increase from an estimated 14.1 percent to 15.1 percent in the representation of minorities in management over the same 5-year period. 
The nine leading diversity practices that we previously identified in 2005 are still relevant today, according to industry representatives with whom we spoke. Some industry representatives highlighted practices among these nine that they considered the most important to foster diversity and inclusion in their organizations. For example, top leadership commitment drives the other eight leading diversity practices, according to 9 of 10 industry representatives. In addition, accountability helps to promote the implementation of the other leading diversity practices because an issue is more likely to be addressed if it is tracked, according to 2 industry representatives. Moreover, creating awareness of the benefits of diversity for an organization among management and employees is important because it increases commitment to further the diversity goals of the organization, according to 7 industry representatives whom we interviewed. However, 1 industry representative told us there are still some firms that do not see the importance of diversity. In addition, 2 industry representatives said these 9 leading diversity practices should be expanded beyond workforce management to include, for example, an organization’s contracting efforts. Some industry representatives also noted that measuring the impact of various diversity practices is an important practice but that it can also be challenging; for example, it can be difficult to link specific practices to diversity outcomes and it can be a long-term process. According to some industry representatives, financial services organizations may measure the effectiveness of their diversity practices by assessing attrition, recruiting, and promotion rates, which are similar to measures we had previously reported. For example, a financial services organization may measure the proportion of certain minority groups or women in its workforce or among its promotions to determine the effectiveness of its practices. 
Further, financial services firms may use surveys to gather employee perspectives on workforce diversity issues in the organization, such as perceived fairness in the promotion process or factors that affect an employee’s decision to remain with the firm, among other topics. Additional diversity practices identified by some industry representatives that can support the leading diversity practices include the following: Sponsor individuals. Sponsorship of women within an organization where an executive acts as a guide to help women navigate the organization and expand their networks is an important diversity practice, according to three industry representatives. This sponsorship practice goes beyond the mentoring programs we previously reported in 2006, as a sponsor acts as an advocate to help the individual advance within the organization. Address biased perceptions. One industry representative told us about an effort to combat unconscious bias in promotions. They described a promotion system designed to address biased perceptions, such as a view of leaders as being typically male. According to the industry representative, the firm that employed this diversity practice gathered complete and objective evaluations of employees and trained its managers to recognize and address these perceptions. The result was that the firm promoted greater numbers of women into management. No industry representatives that we contacted reported changes to diversity practices as a result of the challenges faced by many firms during the financial crisis. Although representation of minorities and women has remained about the same from 2007 through 2011, according to some industry representatives, the industry continues to be focused on diversity. However, three industry representatives did cite specific instances where funding was scaled back as a result of the recent financial crisis. 
One industry representative told us that investment in training programs was reduced across the organization, but when a measurable impact on employees was identified at this organization, steps were taken to address it. Some industry representatives cited challenges to achieving a diverse workforce in general. We have previously reported some of these challenges, which can affect some of the leading diversity practices. Six industry representatives said that diversity recruitment is difficult because the supply (or “pipeline”) of minority and women candidates is limited. This has been a consistent challenge that we previously reported in 2006 and 2010. Available data indicate that for the internal pool of potential candidates for some management positions, representation of women varied, while representation of minorities was higher in every nonmanagement category compared to management positions (see fig. 9). For example, in 2011 the representation of women was greater in professional positions (about 51 percent) compared to sales positions (about 38 percent). In addition, the representation of minorities was higher in all nonmanagement positions than at the management level in 2011, and especially high in technical and clerical positions, at more than 29 percent in both types of positions. Further analysis of diversity in various workforce positions can be found in appendix II. In recent years, representation in business graduate programs, a potential source of future managers in the financial industry, has remained stable for women and has increased slightly for minorities, but representation is still low for both women and minorities when compared to the overall representation of students in the university system. 
To assess one possible external pool of candidates for financial services firms, we obtained data from the Association to Advance Collegiate Schools of Business (AACSB) on the number of students enrolled in Master of Business Administration (MBA) degree programs in AACSB member schools in the United States from 2007 through 2011, as well as the number of students in the university system. According to these data, the representation of women remained constant over this period, while the representation of minorities increased. For example, the representation of women among MBA students remained at about 37 percent over the 5-year period, while representation of women was slightly higher in the overall university system at about 41 percent. In contrast, as table 2 shows, the representation of minorities increased among MBA students from about 26 percent in 2007 to about 29 percent in 2011. However, representation of minorities in the overall university system was slightly higher, increasing from about 29 percent in 2007 to about 34 percent in 2011. Since the financial crisis, senior management-level minority and gender diversity at the federal financial agencies and Reserve Banks has varied across individual entities. The representation of minorities at the senior management level increased slightly overall at both the agencies and Reserve Banks. In addition, the representation of women at the senior management level increased slightly overall for both the agencies and Reserve Banks. Agency and Reserve Bank officials identified key challenges to increasing workforce diversity overall and at the senior management level, including limited representation of minorities and women among internal and external candidate pools. Senior management-level representation of minorities and women varied across individual federal financial agencies and the 12 Reserve Banks. The agencies included FDIC, the Federal Reserve Board, NCUA, OCC, and Treasury. 
Complete data for this period were not available for CFPB, FHFA, and SEC, and we excluded these agencies from our analysis of changes in senior management-level diversity from 2007 through 2011, but provide recent data when available. Data for each agency are provided in appendix IV. CFPB assumed responsibility for certain consumer financial protection functions in July 2011 and has not yet reported workforce information to EEOC. However, we received recent employment profile data from CFPB as of May 2012. FHFA, which was established in July 2008, started reporting workforce data for 2010; while our analysis provides 2010 and 2011 data for FHFA, our analysis across the agencies excludes FHFA from aggregated totals. SEC reported data for 2007 through 2011, but revised how it reported officials and managers during the 5-year period; while our analysis provides 2011 senior management-level data for SEC, we excluded SEC from our senior management-level trend analysis. In our review of agency reports, we found that from 2007 through 2011, the representation of minorities among senior management-level employees, when aggregated across FDIC, the Federal Reserve Board, NCUA, OCC, and Treasury, increased slightly, from 16 to 17 percent for the agencies combined (see fig. 10). From 2007 through 2011, three agencies—FDIC, the Federal Reserve Board, and Treasury—showed an increase in the representation of minorities at the senior management level of between 1 and 3 percentage points. Two agencies—NCUA and OCC—experienced no percentage point change in their representation of minorities at the senior management level from 2007 through 2011. In 2011, the representation of minorities among senior management-level employees of these agencies, FHFA, and SEC ranged from 11 percent at SEC to 24 percent at FHFA. Additionally, CFPB employment data showed about 28 percent representation of minorities among senior officials as of May 2012. 
In our review of EEO-1 reports provided by the Reserve Banks, we found that the representation of minorities among senior management-level employees in aggregate across the 12 Reserve Banks increased from 11 percent to 14 percent from 2007 through 2011 (see fig. 11). The population of senior management-level employees at each bank in 2011 ranged from 9 employees at the Reserve Banks of Chicago, Dallas, and Minneapolis, to 59 employees at the Reserve Bank of New York, and the population of minority senior management-level employees at each bank ranged from zero employees at the Reserve Bank of Cleveland to 7 employees at the Reserve Bank of New York. Specific information on each Reserve Bank is provided in appendix IV. In general, the representation of women at the senior management level increased slightly since the beginning of the financial crisis in 2007 at agencies, but representation percentages varied for each entity. In our review of agency reports, we found that from 2007 through 2011, the representation of women at the senior management level increased slightly from 34 to 36 percent across FDIC, the Federal Reserve Board, NCUA, OCC, and Treasury, in aggregate (see fig. 12). Changes varied by agency, from a decrease of 5 percentage points at OCC to an increase of 5 percentage points at NCUA. Four of the five agencies—FDIC, the Federal Reserve Board, NCUA, and Treasury—showed an increase of between 3 and 5 percentage points in the representation of women at the senior management level from 2007 through 2011. In 2011, the representation of women among senior management-level employees ranged among the agencies from 31 percent at FDIC to 47 percent at FHFA. Additionally, CFPB employment data showed the representation of women among senior officials at about 35 percent as of May 2012. 
In our review of EEO-1 reports provided by the Reserve Banks, we found that from 2007 through 2011, the representation of women at the senior management level increased from 32 percent to 38 percent for the Reserve Banks, in aggregate (see fig. 13). As mentioned previously, the population of senior management-level employees at each bank in 2011 ranged from 9 employees at the Reserve Banks of Chicago, Dallas, and Minneapolis, to 59 employees at the Reserve Bank of New York. The population of women among senior management-level employees at each bank in 2011 ranged from 2 employees at the Reserve Bank of Boston to 25 employees at the Reserve Bank of New York. Specific information on each Reserve Bank is provided in appendix IV. Several agencies reported on existing diversity practices related to retaining and promoting employees to build management-level diversity. For example, according to agency reports, some Treasury offices conduct formal mentoring programs, and the Federal Reserve Board has customized mentoring programs within its divisions, which, in conjunction with a leadership exchange program sponsored by the Federal Reserve System, provide employees opportunities to develop new skills and experiences. Further, OCC reported having development programs for employees within its bank supervision division that provide leadership and development opportunities to staff, and agency-sponsored employee network groups implemented mentoring circles to assist in the career development and retention of the agency's workforce. Several Reserve Banks identified practices targeted to improve management-level diversity, including changes to hiring practices and mentoring programs. 
For example, officials from several Reserve Banks we contacted said their organizations revised their hiring policies to open all management-level positions to external applicants in addition to current employees as a way to build management-level diversity by hiring diverse, experienced candidates from outside the organization. Additionally, the Reserve Banks of Dallas and New York began piloting new mentoring programs in 2011, and each planned to expand its program based on initial feedback its OMWI had received. These banks and several others with existing mentoring programs reported that mentoring programs were important to retaining and developing minorities and women within their organizations. Later in this report, we provide additional information on the agencies' and Reserve Banks' recruitment practices as part of their efforts to implement section 342 of the Dodd-Frank Act. Based on our analysis of minority and gender diversity at all levels from 2007 through 2011, workforce diversity varied at the federal financial agencies and Reserve Banks, with slight decreases in aggregate. Specifically, the representation of minorities decreased slightly from 31 percent to 30 percent from 2007 through 2011 across FDIC, the Federal Reserve Board, NCUA, OCC, SEC, and Treasury, in aggregate. Additionally, CFPB employment data showed the representation of minorities of all agency employees at about 33 percent as of May 2012. Three agencies—NCUA, OCC, and SEC—showed a 1 percentage point or greater increase in the overall representation of minorities during the 5-year period, according to agency reports. In 2011, the representation of minorities at the agencies ranged from 25 percent at NCUA to 44 percent at the Federal Reserve Board. Our analysis of EEO-1 reports provided by the Reserve Banks for 2007 through 2011 showed that the representation of minorities across the Reserve Banks declined slightly in aggregate, from 38 percent to 36 percent. 
The Reserve Banks of Minneapolis and New York showed a 2 percentage point increase in the overall representation of minorities working at Reserve Banks, the Reserve Bank of Boston showed no percentage point change, and the remaining nine banks showed decreases of 1 to 8 percentage points. In 2011, the representation of minorities at the Reserve Banks ranged from 16 percent at the Reserve Bank of Kansas City to 53 percent at the Reserve Bank of San Francisco. Similarly, we found that overall gender diversity varied at individual agencies and Reserve Banks, and generally declined slightly from 2007 through 2011. The overall representation of women in the workforce aggregated across FDIC, the Federal Reserve Board, NCUA, OCC, SEC, and Treasury declined slightly from 47 percent to 45 percent over the 5-year period. Additionally, CFPB employment data showed the representation of women of all agency employees at about 49 percent as of May 2012. Two agencies—NCUA and SEC—showed no percentage point change in the representation of women during the 5-year period; OCC showed a decrease of about 1 percentage point, and the other three agencies—FDIC, the Federal Reserve Board, and Treasury—experienced decreases of 2 percentage points. In 2011, the representation of women among all employees at the agencies ranged from 42 percent at FDIC to 48 percent at SEC and Treasury. The overall representation of women across the Reserve Banks, in aggregate, declined from 49 percent to 45 percent from 2007 through 2011. All Reserve Banks showed declines in the representation of women among all employees during the 5-year period, ranging from a 1 percentage point decrease at the Reserve Bank of New York to a 7 percentage point decrease at the Reserve Bank of Cleveland. 
For example, in 2007, 827 of the Reserve Bank of Cleveland’s 1,568 employees were women, and in 2011, 500 of the bank’s 1,094 employees were women; the bank’s workforce changed from having around 53 percent women employees to about 46 percent women employees. In 2011, the overall representation of women at Reserve Banks ranged from 40 percent at the Reserve Banks of Philadelphia and Richmond to 53 percent at the Reserve Bank of Minneapolis. See appendix III for additional information on the overall workforce representation for the agencies and Reserve Banks. According to officials from five Reserve Banks and the Federal Reserve Board, consolidation of check processing and other operations, some of which occurred since the financial crisis, had eliminated many administrative and service worker positions. Since these positions are often held by minorities and women, these consolidations affected overall employment diversity at affected Reserve Banks. In response to declines in the use of paper checks and greater use of electronic payments, the Reserve Banks took steps beginning in 2003 to reduce the number of locations where paper checks were processed. In 2001, the Federal Reserve System employed around 5,500 people in check processing functions across 45 locations, and in 2008, around 2,800 employees supported check processing functions across 18 locations. By 2010, one paper check processing site remained in Cleveland, along with an electronic check processing site in Atlanta. As of January 2013, approximately 480 employees supported check processing functions across the Federal Reserve System. The Federal Reserve System is projected to complete its consolidation of check processing functions in 2013. OMWI officials described challenges to building workforce diversity both at the management level and overall. Four agencies—FDIC, the Federal Reserve Board, FHFA, and OCC—and three Reserve Banks—the Reserve Banks of Chicago, Minneapolis, and St. 
Louis—cited underrepresentation of minorities and women within internal candidate pools as a challenge to building management-level diversity, as many management-level positions are filled through promotions or internal hiring processes. Additionally, the Reserve Banks of Dallas, Minneapolis, Philadelphia, and San Francisco said low turnover was a challenge to increasing their management-level diversity profiles because it limited opportunities to increase organizational diversity through hiring and promotion. Federal financial agencies and Reserve Banks identified other challenges to building workforce diversity generally. The Reserve Banks of Atlanta, Boston, Chicago, Kansas City, and St. Louis cited competition from the private sector for recruiting diverse candidates as a challenge. In addition, FHFA and the Reserve Banks of Cleveland, Philadelphia, and San Francisco cited limited representation of minorities within external candidate pools as another challenge. The Federal Reserve Board and the Reserve Banks of Chicago and Kansas City reported that the availability of external candidates could be an issue in particular for hiring certain specialized positions, such as economists, which would involve a small candidate pool with limited representation of minorities. Additionally, three Reserve Banks identified geographic impediments to their national recruitment efforts, explaining that it is difficult to attract candidates from outside their region. For example, the Reserve Banks of Kansas City and St. Louis said it was difficult to recruit candidates lacking ties to the central United States, and the Reserve Bank of San Francisco cited difficulty recruiting from the eastern United States. Further, several agencies and Reserve Banks identified other challenges to building workforce diversity. For example, Treasury cited budget constraints on hiring and the Reserve Bank of Cleveland cited time constraints on recruitment practices as challenges. 
Additionally, NCUA cited a challenge in establishing tracking systems to help identify barriers to recruiting, hiring, and retaining minorities. Federal financial agencies and Reserve Banks have begun implementing key requirements of section 342 of the Dodd-Frank Act. First, all agencies and Reserve Banks have established OMWIs. Most agencies and all of the Reserve Banks used existing policies to establish standards for equal employment opportunity required by the act. Although many agencies and Reserve Banks had been using recruitment practices required by the act prior to its enactment, the majority of OMWIs have expanded these or initiated other practices. In addition to meeting requirements regarding their diversity policies, the federal financial agencies have taken preliminary steps to develop procedures for assessing the diversity policies and practices of entities they regulate, as required under the act. Finally, nearly all the agencies and all of the Reserve Banks are reporting annually on their diversity practices. While many OMWIs have implemented or are planning efforts to measure and evaluate the progress of their diversity and inclusion activities, information on such efforts is not yet reported consistently across the OMWI annual reports. Reporting such information could enhance their efforts to measure outcomes and the progress of their diversity practices. All federal financial agencies and all Reserve Banks have established an OMWI. Six of the seven agencies that existed when the Dodd-Frank Act was enacted established OMWIs by January 2011, pursuant to the time frame established in the act. Additionally, SEC formally established its OMWI in July 2011, following House and Senate Appropriations Committees' approvals of the agency's request to create an OMWI. SEC selected an OMWI director in December 2011, who officially joined the office in January 2012. 
CFPB, which assumed responsibility for certain consumer financial protection functions in July 2011, established its OMWI in January 2012 and its OMWI director officially joined the agency in April 2012. Many agencies and most of the Reserve Banks established their OMWIs as new, separate offices. Four of eight agencies and 9 of 12 Reserve Banks established their OMWIs separate from other offices, including four banks that refocused existing diversity offices as their OMWIs. Three agencies—FDIC, the Federal Reserve Board, and OCC—and three banks—the Reserve Banks of Atlanta, Kansas City, and Philadelphia— established their OMWIs within existing offices of equal employment opportunity (EEO) or diversity. FHFA established its OMWI and then merged its EEO function into that office. OMWI officials from several agencies with separate OMWIs said their staff worked with their EEO offices to address agency diversity issues. Similarly, many agency and Reserve Bank OMWI officials said they coordinated with other offices across their organizations, such as human resources, recruiting, procurement, and management, to support ongoing diversity and inclusion efforts organizationwide. Federal financial agencies and Reserve Banks all have taken steps to staff their OMWIs. As of January 2013, the agencies had allocated between 3 and 40 full-time equivalent positions to their OMWIs (see table 3), and all agencies had open positions they planned to fill among these allocated positions. FDIC had allocated 40 full-time equivalent positions to its combined OMWI/EEO office as of January 2013. Many of FDIC’s OMWI staff, including eight EEO specialists, support the office’s EEO functions, and OCC and FHFA also reported EEO specialists among their staff. The agency OMWIs included directors and analysts among their staff, as well as some positions specific to certain functions of the OMWIs. 
For example, four of the agencies—CFPB, FDIC, NCUA, and SEC—had allocated staff specifically to recruitment and outreach functions, and four of the agencies—NCUA, OCC, SEC, and Treasury— had allocated staff specifically to business and supplier diversity. Four agencies—the Federal Reserve Board, FHFA, NCUA, and OCC—had each allocated a position to help implement the Dodd-Frank Act requirement to review the diversity practices of regulated entities. Additionally, two of the agencies—CFPB and SEC—had attorney positions among their OMWI staff. The Reserve Banks had allocated between three and seven full-time equivalent positions to their OMWIs as of January 2013 (see table 4). Ten of the 12 Reserve Banks had filled all of these positions, while the Reserve Banks of Cleveland and St. Louis each had one open position. The Reserve Bank OMWIs included directors and analysts among their staff. Few Reserve Banks designated specific OMWI functions to certain positions. Three banks, the Reserve Banks of Atlanta, Boston, and St. Louis, had each allocated one position to supplier or business diversity, and two other banks, the Reserve Banks of Chicago and Cleveland, had each allocated one position to help carry out the reporting functions of the OMWIs. Perspectives on the role of OMWIs varied across some Reserve Bank officials with whom we spoke. While several Reserve Bank officials said their OMWIs were involved in policy development with a commitment to improving the Reserve Bank’s diversity efforts over time, officials from one Reserve Bank said their OMWI was compliance-focused and primarily analyzed the banks’ human capital resources and recruiting functions for compliance with Dodd-Frank Act requirements. 
Reserve Bank of Dallas officials told us they considered the OMWI staff members as objective critics of the Reserve Bank's recruitment, procurement, and financial education efforts, and that bank management is responsible for fostering diversity and inclusion across the organization. The act also required federal financial agency and Reserve Bank OMWIs to develop standards for equal employment opportunity and the racial, ethnic, and gender diversity of the workforce and senior management. Six of eight agencies and most Reserve Banks indicated that either their previously established equal employment opportunity standards or MD-715 requirements for agencies helped satisfy the Dodd-Frank Act requirement to establish equal employment opportunity standards with minimal changes, while two agencies and one Reserve Bank were still determining how to respond to the requirement. Treasury and CFPB planned to develop benchmarks of best practices as standards for diversity and inclusion. For example, Treasury officials said they planned to identify qualitative measures or indicators for assessing workforce diversity practices. Additionally, the Reserve Banks of Kansas City and San Francisco revised their diversity and inclusion policies pursuant to Dodd-Frank Act requirements. One agency established new standards, separate from its existing equal employment opportunity policies, for the diversity of the workforce and senior management. Specifically, NCUA developed a diversity and inclusion strategic plan in response to a government-wide executive order that provides diversity standards and goals, which officials said the agency used to help establish expectations for staff. OMWI Annual Reports to Congress and officials we contacted indicated that federal financial agencies and Reserve Banks have implemented various practices pursuant to the Dodd-Frank Act's requirements regarding diversity recruiting, outlined in table 5. 
Most agency and Reserve Bank OMWIs indicated that they had been conducting various diversity recruitment practices prior to the enactment of the Dodd-Frank Act—such as partnering with organizations focused on developing opportunities for minorities and women. The majority of agencies and Reserve Banks focused their recruitment efforts on attending job fairs and maintaining partnerships with minority-serving institutions and organizations. According to Federal Reserve Board and Reserve Bank officials, they collectively participate in and fund recruitment activities, including national career fairs, advertisements in diverse publications, and social media initiatives. The Reserve Bank of Chicago coordinates the Federal Reserve System's participation in national diversity recruitment events and oversees an internal training initiative aimed at developing and retaining employees within the Federal Reserve System. In addition to participating in these efforts, Reserve Banks conduct some activities independently. Some OMWIs indicated their diversity activities had changed due in part to recent efforts to satisfy section 342 requirements and in part to broader approaches to diversity and inclusion. For example, some OMWIs indicated the scope of their diversity and inclusion practices had broadened to include persons with disabilities as well as the lesbian, gay, bisexual, and transgender community. Further, the majority of OMWIs reported on plans to improve or expand existing practices. For example, many OMWIs described plans to pursue new or further develop existing partnerships with organizations focused on developing opportunities for minorities and women, and some OMWIs described recent efforts to expand internship opportunities for minority students. Some OMWI officials identified practices targeted to improve organizationwide diversity, which could eventually help build management-level diversity. 
These included targeted recruitment to attract minorities and women, training for hiring managers and other employees on diversity hiring practices, and expanded internship programs as a way to hire a greater number of female and minority interns. Targeted recruitment. All agencies and Reserve Banks with whom we spoke had participated in career fairs or partnerships with minority- serving organizations, as outlined in section 342 of the Dodd-Frank Act, to target diversity recruitment, and in several cases bolster recruitment of particular populations, such as Hispanics. The OMWIs at FDIC, FHFA, and SEC work with the agencies’ hiring and recruitment staff to identify strategies for recruiting diverse candidates. Additionally, the Federal Reserve Board OMWI reported that including hiring managers at diversity career fairs had made their targeted recruitment activities more effective. Training for hiring managers. Some OMWIs reported they implemented practices to educate supervisors and hiring managers on diversity hiring practices. For example, the Reserve Bank of New York designed a training course to enhance cross-cultural interviewing skills of recruitment staff. OCC also provides diversity recruitment training to the agency’s recruitment staff, and CFPB planned to provide its hiring managers a toolkit with tips on diversity hiring practices. Internship programs. Many agencies and Reserve Banks implemented internship programs to build employment diversity by developing a more diverse pipeline of potential entry-level candidates. For example, the Reserve Bank of San Francisco reported that it expanded its internship program to support more interns and leveraged partnerships with organizations representing minorities and women to increase the diversity of the bank’s internship program applicant pool. 
In response to section 342 of the Dodd-Frank Act, seven federal financial agencies have taken preliminary steps to respond to the requirement to develop standards for assessing the diversity policies and practices of entities they oversee. While these agencies have made initial progress, it is too soon to evaluate how effectively the agencies are responding to this requirement. The affected agencies include CFPB, FDIC, FHFA, the Federal Reserve Board, NCUA, OCC, and SEC. In addition to this requirement under the Dodd-Frank Act, FHFA is also subject to the Housing and Economic Recovery Act of 2008 (HERA), under which it must assess its regulated entities’ diversity activities and meet other provisions similar to those in section 342. In 2010, FHFA developed an agency regulation implementing HERA requirements, in part, to ensure that diversity is a component of all aspects of its regulated entities’ business activities. The agency’s regulated entities include Fannie Mae, Freddie Mac, Federal Home Loan Banks, and the Federal Home Loan Bank System’s Office of Finance. HERA requires the agency’s regulated entities to develop diversity policies and procedures, staff an OMWI, and report annually to FHFA on their OMWI activities, among other requirements. In addition, FHFA has enforcement authority under HERA and FHFA’s promulgated regulation to ensure its regulated entities have diversity standards in place. According to FHFA OMWI officials, the agency’s response to HERA also satisfies the section 342 requirement. According to OMWI officials, other agencies reviewed FHFA’s regulation as a possible option for responding to the section 342 requirement; however, the enforcement authority included in FHFA’s regulation is unique to the agency. They said that under the Dodd-Frank Act their agencies do not have enforcement authority to require regulated entities to implement diversity standards and practices. 
Officials from the affected agencies also told us their OMWIs collaborated on initial steps to determine how to respond to these requirements by meeting periodically as a group, meeting with members of Congress, and performing outreach to industry participants and advocacy groups to understand industry views on developing standards for assessing diversity policies and practices. Although section 342 provides for the development of standards for the assessment of diversity policies and practices of regulated entities, it further provides that nothing in the requirement may be construed to require any specific action based on the findings of the assessment. Pub. L. No. 111-203, § 342(b)(4) (2010). One OMWI reported that industry representatives discussed options for evaluating diversity with respect to a regulated entity's size, complexity, and market area. OMWI officials told us responding to the requirement was a challenge for several reasons. Specifically, differences across regulated entities in terms of size, complexity, and market area made it challenging to develop a uniform standard. Determining the process and format for developing standards was also a challenge. OMWI officials also said they want to minimize adding a new regulatory burden in meeting this provision; therefore, the agencies would like to leverage existing information sources—data that regulated entities already provide—in evaluating the diversity activities of regulated entities. For example, to avoid duplicating existing data-collection efforts, CFPB and NCUA were working with EEOC for access to EEO-1 data for regulated entities. OCC officials said OCC had also considered using EEO-1 data, but some regulated entities had concerns about maintaining proprietary information, given the potential for Freedom of Information Act requests. 
In addition to establishing an OMWI, the act required federal financial agencies and Reserve Banks to report annually on their diversity practices, and nearly all of the agencies and all the Reserve Banks have begun doing so. As discussed earlier, the act required each OMWI to submit to Congress an annual report on the actions taken pursuant to section 342, including information on the percentage of amounts paid to minority- and women-owned contractors and successes and challenges in recruiting and hiring qualified minority and women employees, and other information as the OMWI director determines appropriate. Including more information on the outcomes and progress of their diversity practices could enhance the usefulness of these annual reports. Seven of eight agencies and all Reserve Banks issued annual reports in 2011. CFPB, which was created in July 2010 and assumed responsibility for certain consumer financial protection functions in July 2011, issued an agencywide semiannual report for 2011. Its OMWI planned to issue an annual report for 2012 at the same time as the other agencies, in March 2013. In their 2011 Annual OMWI Reports to Congress, several agencies and Reserve Banks reported on efforts to measure outcomes and progress of various diversity practices, which provide examples of the types of outcomes and measures of progress that could be helpful for OMWIs to include in their annual reports. Although the act requires information on successes and challenges, it does not specifically require reporting on effectiveness; however, the act provides some leeway to the federal financial agencies and the Reserve Banks to include “any other information, findings, conclusions, and recommendations for legislative or agency action, as the Director determines appropriate.” Measurement of diversity practices is one of the nine leading diversity management practices we previously identified. 
We have reported that quantitative measures—such as tracking employment demographic statistics—and qualitative measures—such as evaluating employee feedback survey results—could help organizations translate their diversity aspirations into tangible practice. The Federal Reserve Board reported that it tracks job applicant information to assess the diversity of applicant pools, candidates interviewed, and employees hired as a result of diversity recruiting efforts, and FDIC reported that it monitors participation and attrition rates and diversity characteristics of participants in a development program. SEC reported plans to develop standards for assessing its ongoing diversity and inclusion efforts and include them in a strategic plan. The Reserve Banks of Chicago, Philadelphia, Richmond, and San Francisco reported on the number of internships each bank supported and the ethnic and gender diversity of the interns. The Reserve Bank of Chicago also reported on the number of job offers extended and candidates hired from its internship program, as well as on the number of candidates successfully hired from a diversity career expo. Further, the Reserve Bank of Cleveland identified reporting tools developed to monitor the bank’s inclusion in contracting efforts. In addition to using these measures, some OMWI officials said they used annual employee surveys as a measurement tool to gather information about the progress of their diversity practices, including retention practices. For example, FDIC’s annual employee survey includes specific questions related to diversity, and the agency uses responses to assess the effectiveness of policies and programs and outline action steps for improvement. OCC officials told us the government-wide federal employee viewpoint survey provided information on employee perspectives about diversity, and the agency measured its results against government-wide scores. 
Further, OMWI officials from the Reserve Bank of Minneapolis said exit surveys and employee declination surveys provided additional information for evaluating their retention and recruiting programs. Federal financial agencies and Reserve Banks have focused their initial OMWI efforts on implementing section 342 of the Dodd-Frank Act. While many OMWIs have implemented or are planning efforts to measure and evaluate the progress of their diversity and inclusion activities, which is consistent with the leading diversity management practices, information on such efforts is not yet reported consistently across the OMWI annual reports. According to OMWI officials as well as industry representatives we interviewed, measuring the progress of diversity recruitment and retention practices is a challenging, long-term process. For example, NCUA officials told us measuring the progress of certain recruiting practices could be a challenge, as access to demographic information about job applicants might be limited. Additionally, FHFA officials told us that while measuring the progress of diversity practices was needed to identify best practices, such measurement needs to be efficient and meaningful. However, without knowledge of OMWI efforts to measure outcomes and the progress of their diversity practices, Congress lacks information that would help hold OMWIs accountable for achieving desired outcomes. In addition, increased attention to evaluation and measurement through annual reporting of these efforts could help the OMWIs improve management of their diversity practices. Reporting such information would provide an opportunity for the agencies and Reserve Banks to learn from others’ efforts to measure their progress and indicate areas for improvement. Section 342 of the Dodd-Frank Act requires federal financial agencies and Reserve Banks to develop procedures to ensure, to the maximum extent possible, the fair inclusion and utilization of women and minorities in contracting. 
Specifically, the act requires agency and Reserve Bank actions to ensure that their contractors are making efforts to include women and minorities in their workforces. The act also has requirements for actions to increase contracting opportunities for minority- and women-owned businesses (MWOB). Most agencies and Reserve Banks have developed and included a provision in contracts for services requiring their contractors to make efforts to ensure the fair inclusion of women and minorities in their workforces and subcontracted workforces. The extent to which these agencies and Reserve Banks have contracted with MWOBs varied widely. These entities reported multiple challenges to increasing contracting opportunities for MWOBs and used various technical assistance practices to address these challenges. To address the act’s requirement to ensure the fair inclusion of women and minorities, to the maximum extent possible, in contracted workforces, most agencies either have developed or are in the process of developing fair inclusion provisions in their contracts for services, and all Reserve Banks have done so. In addition, some agencies and all Reserve Banks have developed procedures to assess contractors’ efforts for workforce inclusion of women and minorities. Five agencies—FDIC, FHFA, NCUA, OCC, and the Federal Reserve Board—and all Reserve Banks have created a fair inclusion provision and are using it in contracts for services. Section 342 of the Dodd-Frank Act requires agencies and Reserve Banks to develop procedures for review and evaluation of contract proposals for services and for hiring service providers that include a written statement that the contractor, and as applicable subcontractors, shall ensure, to the maximum extent possible, the fair inclusion of women and minorities in the workforce of the contractor and, as applicable, subcontractors. 
The act does not specify the elements to be included in the written statement and provides that each OMWI director prescribe the form and content of the statement. CFPB, SEC, and Treasury are each in the process of developing a fair inclusion provision. CFPB is developing procurement procedures to address the requirements of the act and has required more time because its OMWI office was established in January 2012. SEC is subject to the Federal Acquisition Regulation (FAR) and is currently developing its inclusive contract provision. While CFPB and SEC develop inclusion statements pursuant to the act, both agencies have been using the equal employment opportunity statement contained in the FAR in executed contracts. Treasury has developed its fair inclusion provision to add to future contracts. It has issued a notice of proposed rulemaking in the Federal Register for public comments on this change to its contracting procedures, as required under the law. The comment period ended on October 22, 2012. Treasury received eight comments, which included, among other things, suggestions to make the fair inclusion provision applicable to all contracts regardless of the dollar amount of the contract and to better specify the documentation required of contractors to demonstrate that they have met the requirements of the fair inclusion provision. Treasury is currently reviewing the public comments and considering changes to the proposed rule. The fair inclusion provisions we reviewed contained the following: 

Equal employment opportunity statement: Fair inclusion provisions include a commitment by the contractor to equal opportunity in employment and contracting and, to the maximum extent possible consistent with applicable law, the fair inclusion of women and minorities in the contractor’s workforce. 
Documentation: To enforce the fair inclusion provision, agencies require contractors to provide documentation of their efforts to include women and minorities in the contractor’s workforce, such as a written affirmative action plan; documentation of the number of employees by race, ethnicity, and gender; information on subcontract awards, including whether the subcontractor is an MWOB; and any other actions describing the contractor’s efforts toward the inclusion of women and minorities. 

Contract amount threshold: Agencies apply the fair inclusion provision to contracts exceeding a certain dollar amount. For two agencies subject to the act, this threshold is any amount over $150,000. For three agencies subject to the act, this threshold is any amount over $100,000. The Reserve Bank fair inclusion provisions we reviewed did not generally include a dollar-amount threshold. 

None of the officials from the five agencies that have implemented the fair inclusion provision required by the act told us they had received an adverse reaction from contractors, but officials from a majority of the Reserve Banks we spoke with described resistance or concerns from some contractors. OCC stated that smaller businesses had expressed confusion about the requirement because the businesses are too small to report workforce demographics to EEOC. Eight Reserve Banks described contractors expressing some disagreement or concern at the inclusion of the language in contracts. According to some Reserve Bank officials, contractors were concerned that accepting the fair inclusion provision would trigger other federal requirements for their businesses or subject the contractor to meeting hiring or subcontracting targets. Some Reserve Banks described explaining the limited scope of the provision to concerned contractors. 
Other Reserve Banks described modifying the language in the fair inclusion provision, for example, in one case, changing a phrase regarding the contractor’s efforts to include women and minorities from “to the maximum extent possible” to read “to the maximum extent required by law.” Other Reserve Banks described occurrences where, in response to a contractor’s concern, they excluded the fair inclusion language from contracts for a procurement with a small dollar amount or because the vendor provided a service critical to the Reserve Bank and alternate vendors were not available. Finally, one Reserve Bank described declining a contract and seeking an alternate vendor that accepted the provision. Some agencies and all Reserve Banks have developed procedures to assess contractors’ efforts toward workforce inclusion of women and minorities. Section 342 of the Dodd-Frank Act requires the 8 federal financial agencies in the act and 12 Reserve Banks to develop procedures to determine whether a contractor and, as applicable, a subcontractor, has failed to make a good faith effort to include minorities and women in their workforces. Good faith efforts include any actions intended to identify and remove barriers to employment or to expand employment opportunities for minorities and women in the workplace, according to the policies some agencies have developed. For example, recruiting minorities and women or providing these groups job training may be considered good faith efforts for diversity inclusion. Contractors must certify that they have made a good faith effort to include women and minorities in their workforces, according to most policies we reviewed. At the same time, contractors may provide documentation of their inclusion efforts such as workforce demographics, subcontract recipients, and the contractor’s plan to ensure that women and minorities have opportunities to enter and advance within its workforce. 
Agencies and Reserve Banks plan to conduct a review of each contractor’s certifications and documentation annually, once in a 2-year period, or at other times deemed necessary, such as when contracts are executed or renewed, to make a determination of whether the contractor made a good faith effort to include women and minorities in its workforce. Failure to make a good faith effort may result in termination of the contract, referral to the Office of Federal Contract Compliance Programs, or other appropriate action. Four agencies and all Reserve Banks have established good faith effort determination procedures, and four agencies have yet to implement such procedures. In 2011, the proportion of a federal financial agency’s contracting dollars awarded to businesses owned by minorities or women varied, ranging between 12 percent and 38 percent, according to the OMWI reports of the agencies (see fig. 14). Seven federal financial agencies awarded a total of about $2.4 billion for contracting for external goods and services in fiscal year 2011, with FDIC awarding about $1.4 billion of this amount. Similarly, according to Reserve Bank OMWI reports, Reserve Bank contracting dollars paid to businesses owned by minorities or women ranged between 3 percent and 24 percent in 2011 (see fig. 15). Reserve Banks paid about $897 million in fiscal year 2011 in contracting. Among federal financial agencies, OCC awarded the largest proportional amount of contracting dollars to MWOBs—about 38 percent (almost $67 million). OCC officials told us that its contract needs tend to be for services for which there is often a pool of MWOB suppliers and that most of OCC’s 2011 contract dollars were spent on computer-related services. The Federal Reserve Board awarded the smallest proportion of its contracting dollars to MWOBs, with about 12 percent going to such businesses. 
According to the Federal Reserve Board, a significant amount of its procurement is for economic data, which are generally not available from MWOBs. Although federal agencies are not generally required to report on MWOBs, most are required to report on certain small business contracting goals, including goals for women-owned and small disadvantaged businesses (which include minority-owned businesses). In a 2012 report, we found that 35 percent of the funds all federal agencies obligated to small businesses in 2011 were obligated to minority-owned small businesses and 17 percent were obligated to women-owned businesses. Among Reserve Banks, the Reserve Bank of Minneapolis paid the largest proportion of its contracting dollars to MWOBs, with about 24 percent going to such businesses (18.5 percent to minority-owned businesses and about 5 percent to women-owned businesses). According to the Reserve Bank of Minneapolis, almost half of its MWOB contract dollars were paid for software and related technology integration services from minority-owned firms. All other Reserve Banks paid under 13 percent of contracting dollars to MWOBs, with the Reserve Bank of New York paying the smallest percentage of its contracting dollars to such businesses (3 percent). The Reserve Bank of New York described to us and in its 2011 OMWI report its commitment to increasing diversity in its pool of potential contractors through outreach efforts. For example, the Reserve Bank of New York held an event with its primary contractors and small firms to identify potential partnerships, as well as an event that provided small firms consultation on business plans and credit applications to increase the capacity of those firms. Seven federal financial agencies included in this report and all 12 Reserve Banks identified challenges in increasing contracting opportunities for MWOBs. 
Section 342 of the Dodd-Frank Act requires federal financial agencies and Reserve Banks to include in their annual OMWI report a description of the challenges they may face in contracting with qualified MWOBs. As a new agency, CFPB has not been required to complete an annual OMWI report and did not identify any contracting challenges to us. In interviews with us and in the 2011 OMWI reports to Congress, the remaining agencies and all Reserve Banks discussed a number of common challenges to increasing contracting with MWOBs, including the following: 

Limited capacity of MWOBs: Some agencies and Reserve Banks stated that reporting or other requirements under federal contracts were often too great a burden for MWOBs or that MWOBs needed to build capacity to meet federal contracting requirements. Some agencies and Reserve Banks also stated that at times the need for goods or services is not scaled to the capacity of MWOBs. For example, some agencies and Reserve Banks faced challenges identifying MWOBs that can meet procurement needs on a national scale. 

Developing staff or procedures to meet contracting requirements of the act: According to some agencies, new OMWIs require additional staff or staff development, or procedures to meet the requirements of the act, including providing technical assistance to increase opportunities for MWOBs, identifying qualified MWOBs in the marketplace, and incorporating the use of a fair inclusion provision in contracts and good faith effort determination processes, which we discussed earlier, into established procurement processes. 

MWOB classification challenges: Multiple agencies and Reserve Banks described difficulty identifying and classifying suppliers as diverse entities. Some Reserve Banks noted that no central agency is responsible for certifying MWOBs. Some agencies and Reserve Banks also discussed a need for new procedures or information systems to identify and classify diverse ownership of businesses. 
Availability: Some agencies and Reserve Banks noted that specialized services are often only available from a limited pool of suppliers that may not include MWOBs. 

Centralized procurement: Reserve Banks may use the National Procurement Office (NPO), the centralized procurement office for the 12 Reserve Banks, to contract for some goods and services. When a Reserve Bank procures through the NPO, access to MWOBs may be limited because the NPO procures for volume discounts with larger contractors. However, the Reserve Bank of Richmond, in its 2011 OMWI report, described efforts to work with existing large contractors to increase subcontracting with smaller, diverse firms. 

No MWOB bids: In some cases, agencies and Reserve Banks found that potentially eligible MWOB applicants decided not to bid without explanation. 

Other challenges were described on a limited basis by one agency or Reserve Bank. For example, NCUA explained that MWOBs are not familiar with the agency. According to NCUA, to address this issue it increased its outreach budget and its attendance at MWOB events and published an online guide on doing business with the agency. According to FDIC, in some cases MWOBs do not have relationships with large federal contractors for subcontracting opportunities. To address this problem, FDIC emphasizes to larger firms the importance of subcontracting with MWOBs and has negotiated increases in MWOB subcontracting participation with large contractors. FDIC participated in procurement events where small and large contractors could meet and match capabilities. The Reserve Bank of Chicago stated that MWOBs have a hard time standing out in highly competitive industries, such as staff augmentation services. Finally, according to the Reserve Bank of Richmond, MWOBs may have incorrect perceptions that Reserve Banks are subject to federal procurement rules that the MWOBs cannot meet. 
To counter challenges MWOBs may face in accessing federal contracting opportunities, all agencies and Reserve Banks described to us and in their 2011 OMWI reports to Congress various specific forms of technical assistance they provide to MWOBs. Section 342 of the Dodd-Frank Act requires federal financial agencies and Reserve Banks to develop standards for coordinating technical assistance to MWOBs. No agency or Reserve Bank stood out as coordinating technical assistance better than others, although some agencies pointed to FDIC’s longstanding efforts to provide technical assistance to MWOBs as model practices. These activities included developing and distributing literature, such as manuals and brochures describing contracting procedures and resources, to prospective contractors. Most agencies also established websites that function as informational portals on doing business with the agencies and act as an agency entry point for prospective contractors. Agencies and Reserve Banks described outreach activities to MWOBs, including conducting expert panels, hosting meetings and workshops, and exhibiting at trade shows and procurement events. Some of these outreach activities have been coordinated with SBA. For example, FDIC has partnered with SBA to develop a technical assistance program for small businesses, including MWOBs, on money management. OCC worked with SBA to create a technical assistance workshop that they conducted in 2012 with women-owned small businesses. Some agencies have included SBA representatives in supplier diversity events they sponsor. Even prior to the passage of the Dodd-Frank Act, the Federal Reserve Board had participated in SBA procurement fairs and used SBA information and events to market its procurement opportunities among diverse suppliers. Treasury has participated in SBA outreach events and created a mentor-protégé program to assist small businesses with contracting opportunities. 
Agencies and Reserve Banks also provide one-on-one technical assistance, which is intended to meet the specific needs of a prospective MWOB contractor. According to Treasury, it coordinates with SBA to leverage SBA’s knowledge of one-on-one technical assistance practices with MWOBs. FHFA and SEC have created dedicated e-mail addresses and telephone lines for MWOBs to reach their OMWIs, and SEC has established monthly vendor outreach days when MWOBs can speak one-on-one with SEC’s supplier diversity officer and small-business specialist. Some Reserve Banks described conducting one-on-one meetings with prospective contractors in 2011, some of which were held during procurement events. Finally, FDIC offered its database of MWOBs to the other OMWIs, and some agencies described using or planning to use it to identify potential contractors for outreach regarding procurement opportunities. According to FDIC, it sends an updated version of the database to the agencies each quarter. Across financial services firms, federal financial agencies, and Reserve Banks, available data showed that the representation of minorities and women varied, and there was little overall change in workforce diversity from 2007 through 2011. Our findings suggest the overall diversity of the financial services industry has generally remained steady following the financial crisis. Since 2011, federal financial agencies and Reserve Banks have taken initial steps to respond to the Dodd-Frank Act’s requirements to promote workforce diversity, and OMWIs have begun reporting on both planned and existing diversity practices, in addition to reporting on workforce demographic statistics according to EEOC requirements. While many OMWIs have implemented or are planning efforts to measure and evaluate the progress of their diversity and inclusion activities, a leading diversity management practice, information on these efforts is not reported consistently across the OMWI annual reports. 
Although the act requires information on successes and challenges, it does not specifically require reporting on measurement; however, the act provides that the federal financial agencies and the Reserve Banks can include additional information the OMWI director determines appropriate. Measurement of diversity practices is one of the nine leading diversity management practices we have previously identified. Reporting on these efforts as part of annual OMWI reporting would provide Congress, other OMWIs, and the financial services industry with potentially useful information on the ongoing implementation of diversity practices. Such information could be helpful industrywide, as management-level diversity at federal financial agencies, Reserve Banks, and the broader financial services industry has remained largely unchanged. Without information on OMWI efforts to measure outcomes and the progress of diversity and inclusion practices, Congress lacks information that would help it hold agencies accountable for achieving desired outcomes or determine whether OMWI efforts are having any impact. To enhance the availability of information on the progress and impact of agency and Reserve Bank diversity practices, we are recommending to CFPB, FDIC, the Federal Reserve Board, FHFA, NCUA, OCC, SEC, Treasury, and the Reserve Banks that each OMWI report on efforts to measure the progress of its employment diversity and inclusion practices, including measurement outcomes as appropriate, to indicate areas for improvement as part of its annual reports to Congress. We provided drafts of this report to CFPB, the Federal Reserve Board, FDIC, FHFA, NCUA, OCC, SEC, Treasury, and each of the Federal Reserve Banks for review and comment. We received written comments from each of the agencies and a consolidated letter from all of the Reserve Banks. Their comment letters are reproduced in appendixes V through XIII. The agencies and Reserve Banks generally agreed with our recommendation. 
CFPB, the Federal Reserve Banks, FDIC, FHFA, NCUA, OCC, and SEC provided technical comments, which we incorporated as appropriate. We also provided a draft of the report to EEOC for comment. EEOC is not subject to the requirements of section 342 of the act but did provide technical comments, which we incorporated as appropriate. With respect to our recommendation that each OMWI report on efforts to measure the progress of its employment diversity and inclusion practices, including measurement outcomes as appropriate, to indicate areas for improvement as part of its annual reports to Congress, all the federal financial agencies and Reserve Banks indicated that they plan to implement the recommendation: 

The OMWI Director of CFPB explained that its OMWI was the newest of such offices because the agency was created with the enactment of the Dodd-Frank Act and that it planned to include measurement information in future reports. 

The OMWI Director of the Federal Reserve Board stated that the recommendation was consistent with its ongoing practices and that the board would look for additional ways to report on diversity practices. 

FDIC’s OMWI Director agreed with the recommendation and stated that FDIC will include efforts to measure the progress of its diversity practices in its annual reports to Congress. 

The Acting Associate Director of FHFA’s OMWI stated that FHFA would include measurement information in its 2013 OMWI report to Congress. 

The Executive Director of NCUA said the agency will work toward reporting on its efforts to measure the progress of workforce diversity and practices. 

The Comptroller of the Currency stated that OCC had a well-developed diversity and inclusion program through which the agency measures its progress and that OCC has included additional metrics in its 2013 OMWI report to Congress. 

SEC’s OMWI Director noted that the agency plans to incorporate measurement information on its diversity and inclusion practices in its future OMWI reports to Congress. 
Treasury’s OMWI Director agreed with our recommendation and stated that it was consistent with the agency’s efforts to use more than demographic representation to measure the progress of diversity and inclusion efforts. 

The Federal Reserve Banks’ OMWI directors noted that the banks currently include some measurement information in annual reports and said that they will consider additional ways to measure and report on the Reserve Banks’ diversity practices. 

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Chairman, Board of Governors of the Federal Reserve System; the Director, Bureau of Consumer Financial Protection, commonly known as the Consumer Financial Protection Bureau; the Chair, Equal Employment Opportunity Commission; the Chairman, Federal Deposit Insurance Corporation; the Acting Director, Federal Housing Finance Agency; the Chairman, National Credit Union Administration; the Comptroller of the Currency; the Chairman, Securities and Exchange Commission; the Secretary of the Treasury; the Directors of the Offices of Minority and Women Inclusion at the Federal Reserve Banks; and other interested parties. We will make copies available to others upon request. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIV. 
The objectives for this report were to examine (1) what available data show about how the diversity of the financial services industry workforce and the diversity practices taken by the industry have changed from 2007 through 2011; (2) what available data show about how diversity in the workforces of the federal financial agencies and the Federal Reserve Banks (Reserve Banks) has changed from 2007 through 2011; (3) how these federal financial agencies and Reserve Banks are implementing workforce diversity practices under section 342 of the Dodd-Frank Act, including the extent to which their workforce diversity practices have changed since the financial crisis; and (4) the status of federal financial agencies’ and Reserve Banks’ implementation of the contracting provisions of the Dodd-Frank Act related to the inclusion of women and minorities. To describe how diversity in the financial services industry has changed since the beginning of the 2007-2009 financial crisis, we analyzed 2007-2011 workforce data from the Equal Employment Opportunity Commission’s (EEOC) Employer Information Report (EEO-1). The EEO-1 is a report submitted annually to EEOC, generally by private-sector firms with more than 100 employees. In October 2012, we obtained EEO-1 data for the finance and insurance industries, categorized under North American Industry Classification System (NAICS) code 52, for 2007 through 2011. EEO-1 data were specifically obtained from the EEOC’s “officials and managers” category by gender, race/ethnicity, firm size, and industry sector. 
The EEO-1 “officials and managers” category was further divided into two management-level categories of first- and mid-level managers and senior-level managers and then analyzed by gender, race/ethnicity, and firm size. To understand the potential internal candidate pools available for management positions in the financial industry, we obtained EEO-1 data under NAICS code 52 for all positions, including nonmanagement positions, by gender and race/ethnicity. To determine the reliability of the EEO-1 data that we received from EEOC, we interviewed knowledgeable EEOC officials and reviewed relevant documents provided by agency officials and obtained on its website. We also conducted electronic testing of the data. We determined that the EEO-1 data were sufficiently reliable for our purposes. We used monthly averages over 3 months—July, August, and September—from the Basic Monthly Current Population Survey (CPS) for each year and then calculated the estimated percentages, as EEOC’s EEO-1 reports are collected over this period every year. To determine the reliability of the CPS data, which we obtained from a publicly accessible federal statistical database, we gathered and reviewed relevant documentation from the Bureau of the Census website, conducted electronic testing, and determined the standard errors of the CPS estimates. We determined that the CPS data were sufficiently reliable for our purposes. To gather information on a potential external pipeline of diverse candidates for management positions in the financial industry, we obtained demographic data on minority and female students enrolled in undergraduate, Master of Business Administration (MBA), and doctoral degree programs from 2007 through 2011 from the Association to Advance Collegiate Schools of Business (AACSB). We focused on MBA programs as a source of potential future managers and senior executives. Financial services firms compete for minorities in this pool with one another and with firms from other industries. 
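The 3-month averaging described above is simple arithmetic; the following Python sketch illustrates one way monthly estimates for July through September might be averaged before a representation percentage is computed. The counts are entirely hypothetical, not actual CPS figures, and the code is an illustration rather than the methodology's actual implementation.

```python
# Hypothetical illustration of averaging Basic Monthly CPS counts over
# July-September (the EEO-1 collection period) before computing a
# representation percentage. All numbers are made up for the example.
monthly_counts = {
    "July":      {"women": 410, "total": 1000},
    "August":    {"women": 420, "total": 1010},
    "September": {"women": 430, "total": 1020},
}

def three_month_average(field):
    """Average a count across the three collection months."""
    return sum(month[field] for month in monthly_counts.values()) / len(monthly_counts)

# Average first, then calculate the estimated percentage, mirroring the
# order of operations the methodology describes.
women_share = 100 * three_month_average("women") / three_month_average("total")
print(round(women_share, 1))  # 41.6
```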
We combined this information with data on undergraduate and doctoral degree programs to provide information on the overall diversity of the university system. AACSB conducts an annual voluntary survey, the “Business School Questionnaire,” of all its member schools. In 2011, AACSB updated its survey to add two race/ethnicity categories: “two or more races” and “Native Hawaiian or Other Pacific Islander.” For consistency, we combined these two additional categories, along with the representation of Native Americans, into an “other” category. To determine the reliability of the AACSB data, we interviewed a knowledgeable AACSB official and reviewed relevant documents provided by the official and obtained on its website. We determined that the data from AACSB were sufficiently reliable for our purposes. To determine how diversity practices in the financial services industry have changed since the beginning of the financial crisis, we conducted a literature review of relevant studies that discussed diversity best practices within the financial services industry from 2007 through 2011. In addition, we interviewed 10 selected industry representatives to determine whether the nine leading diversity practices we previously identified remain relevant and how diversity practices have changed since 2007. We also reviewed documents produced by these industry representatives. These representatives were selected based on their participation in our previous work, suggestions from federal agencies we interviewed for this report, and the type of industry representative—such as an industry association or private firm. To describe diversity in the workforces of the federal financial agencies and Reserve Banks, we analyzed data we received from the agencies and banks. 
To review changes in the representation of minorities and women in the workforces of federal financial agencies, we obtained from the agencies annual Equal Employment Opportunity Program Status Reports from 2007 through 2011, required under U.S. EEOC Management Directive 715 and known as MD-715 reports. We obtained data from seven of the eight federal agencies required to meet the workforce diversity provisions in section 342 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). These included the Departmental Offices of the Department of the Treasury, the Federal Deposit Insurance Corporation, the Federal Housing Finance Agency (FHFA), the Board of Governors of the Federal Reserve System, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Securities and Exchange Commission. The Bureau of Consumer Financial Protection, commonly known as the Consumer Financial Protection Bureau (CFPB), was created in July 2010 and assumed responsibility for certain consumer financial protection functions in 2011; workforce diversity data for the agency to show trends from 2007 through 2011 were unavailable. Additionally, our trend analysis excluded FHFA, as the agency was created in 2008 and did not report on diversity employment statistics for 2007, 2008, or 2009. Further, our senior management-level trend analysis excluded SEC, as the agency revised how it reported officials and managers during the 5-year period. To review changes in the representation of minorities and women in the workforces of Reserve Banks, we obtained from the banks their annual EEO-1 reports from 2007 through 2011. For the agencies and Reserve Banks, we reviewed workplace employment data by occupational category, distributed by race/ethnicity and gender. 
In our analyses, we considered all categories other than white as racial/ethnic minorities and analyzed trends in diversity both at the senior management level and agency- and bankwide. We analyzed senior management-level and overall diversity trends across all agencies and all Reserve Banks, as well as diversity trends for each agency when trend information was available. To assess the reliability of the MD-715 and EEO-1 data we received from agencies and Reserve Banks, we interviewed EEOC officials on both types of data, as well as agency officials on MD-715 data and Reserve Bank officials on EEO-1 data, about how the data are collected and verified and to identify potential data limitations. We found that while agencies and banks rely on employees to provide their race and ethnicity information, agencies and banks had measures in place to verify and correct missing or erroneous data prior to reporting them, and the officials with whom we spoke generally agreed these data were accurate. Based on our analysis, we concluded that the MD-715 and EEO-1 data were sufficiently reliable for our purposes. To assess how federal financial agencies and Reserve Banks are implementing workforce diversity practices under section 342 of the Dodd-Frank Act, we reviewed agency and bank documentation of efforts to respond to the act’s requirements. Sources included annual Office of Minority and Women Inclusion (OMWI) reports to Congress by agencies and banks, annual agency MD-715 reports, and other documentation provided to us by agency and bank OMWI officials. Additionally, we gathered testimonial information from agency and Reserve Bank OMWI officials on changes in the inclusion of women and minorities in their workforces and any changes in the practices used to further workforce diversity goals. 
Through our review of agency and Reserve Bank documentation and interviews with OMWI officials, we assessed agency and Reserve Bank efforts to measure and report on the progress of their diversity practices, as measurement was one of the nine leading diversity practices we previously identified. To determine the extent to which agencies and Reserve Banks are implementing the requirements of the Dodd-Frank Act regarding the inclusion of women and minorities in contracting, we reviewed 2011 OMWI reports submitted to Congress and interviewed officials on their efforts in this area. We also reviewed OMWI reports to determine the dollar amount and percentage of total contracts federal financial agencies reported awarding to minority- and women-owned businesses (MWOB), and the dollar amount and percentage of total contracts Reserve Banks reported paying to MWOBs in 2011. We verified these figures and our presentation of the information with each agency and Reserve Bank, and we determined that these data were sufficiently reliable for our purposes. We interviewed agency officials on their efforts to coordinate with the Small Business Administration and other federal agencies to provide technical assistance to minority- and women-owned businesses. We collected and reviewed agency documentation of procedures developed to address the act’s requirements, such as policy manuals, process workflows, and technical assistance materials. We also collected and reviewed examples of fair inclusion provisions used in agency and Reserve Bank contracts as required in section 342 of the Dodd-Frank Act. We conducted this performance audit from January 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional detailed analysis of EEOC data on the financial services industry by workforce position and industry sector from 2007 through 2011. The representation of minorities, by gender, was below 45 percent across all positions throughout the 5-year period (see fig. 16). For example, in sales positions, the representation of minorities was higher among women (about 31 percent) than among men (about 17 percent). Similarly, at the professional level, the representation of minority women was about 27 percent, compared to about 23 percent for minority men. Diversity remained about the same across all industry sectors in terms of the representation of both women and minorities. From 2007 through 2011, the representation of women decreased slightly in most industry sectors and remained below 50 percent in all sectors (see fig. 17). The “insurance carriers and related activities” sector was the only sector that showed an increase in the representation of women, from 47.7 percent to 48.2 percent. In contrast, the representation of minorities increased across all sectors. Specifically, from 2007 through 2011, the representation of minorities in the “monetary authorities-central bank” sector increased from 17 percent to 19.8 percent, and in the “funds, trusts, and other financial vehicles” sector it increased from 16 percent to 18.5 percent. This appendix provides information accompanying our review of changes in overall workforce diversity at federal financial agencies and the 12 Reserve Banks from 2007 through 2011. Tables 11 through 14 in appendix IV provide data supporting the figures in this appendix. According to MD-715 data, the representation of minorities in the overall workforce of the agencies, in aggregate, changed little from 2007 through 2011. 
Percentage point changes in the representation of minorities at FDIC, the Federal Reserve Board, NCUA, OCC, SEC, and Treasury varied from a 5 percentage point decrease at Treasury to a 3 percentage point increase at NCUA. In 2011, the representation of minorities in the overall workforce of the agencies and FHFA ranged from 25 percent at NCUA to 44 percent at the Federal Reserve Board. Similarly, we found that the representation of women in the overall workforce of the agencies did not change significantly from 2007 through 2011. Percentage point changes in the representation of women at the agencies from 2007 through 2011 varied from a 2 percentage point decrease at FDIC, the Federal Reserve Board, and Treasury to no percentage point change at NCUA and SEC. In 2011, the representation of women in the overall workforce of the agencies and FHFA ranged from 42 percent at FDIC to 48 percent at SEC and Treasury. According to EEO-1 data provided by the Reserve Banks, the representation of minorities in the overall workforce of the Reserve Banks decreased somewhat from 2007 through 2011. Changes in the representation of minorities at the banks from 2007 through 2011 ranged from an 8 percentage point decrease at the Reserve Bank of Philadelphia to a 2 percentage point increase at the Reserve Banks of Minneapolis and New York. The Reserve Bank of Boston showed no percentage point change from 2007 through 2011. In 2011, the representation of minorities in the overall workforce of the Reserve Banks ranged from 16 percent at the Reserve Bank of Kansas City to 53 percent at the Reserve Bank of San Francisco. In addition, we found that from 2007 through 2011, the representation of women in the overall workforce of the Reserve Banks also declined slightly, according to EEO-1 data provided by the Reserve Banks. 
The Reserve Banks showed decreases in the representation of women in the overall workforce from 1 percentage point at the Reserve Bank of New York to 7 percentage points at the Reserve Bank of Cleveland. The representation of women in the overall workforce in 2011 ranged from 40 percent at the Reserve Banks of Philadelphia and Richmond to 53 percent at the Reserve Bank of Minneapolis. We reviewed agency and Reserve Bank reports and found that since the financial crisis, senior management-level minority and gender diversity at the agencies and Reserve Banks has varied across individual entities. We also found the representation of minorities and women in the overall workforce of the agencies changed little from 2007 through 2011, while the representation of minorities and women in the overall workforce of the Reserve Banks declined slightly. The following tables provide data supporting the senior management-level and total workforce figures in this report. In addition to the individual named above, Kay Kuhlman, Assistant Director; Heather Chartier; Brendan Kretzschmar; Alma Laris; Ruben Montes de Oca; Cheryl Peterson; Jennifer Schwartz; Jena Sinkfield; Andrew Stavisky; and Julie Trinder made major contributions to this report.
As the U.S. workforce has become increasingly diverse, many private- and public-sector entities recognize the importance of recruiting and retaining minorities and women for management-level positions to improve their business. The 2007-2009 financial crisis has renewed questions about commitment within the financial services industry (e.g., banking and securities) to workforce diversity. The Dodd-Frank Act required that eight federal financial agencies and the Federal Reserve Banks implement provisions to support workforce and contractor diversity. GAO was asked to review trends and practices since the beginning of the financial crisis. This report examines (1) workforce diversity in the financial services industry, the federal financial agencies, and Reserve Banks, from 2007 through 2011 and (2) efforts of the agencies and Reserve Banks to implement workforce diversity practices under the Dodd-Frank Act, including contracting. GAO analyzed federal datasets and documents and interviewed industry representatives and officials from the federal financial agencies and Reserve Banks. Management-level representation of minorities and women in the financial services industry and among federal financial agencies and Federal Reserve Banks (Reserve Banks) has not changed substantially from 2007 through 2011. Industry representation of minorities in 2011 was higher in lower-level management positions--about 20 percent--compared to about 11 percent of senior-level manager positions. Industry representation of women at the overall management level remained at about 45 percent. Agency representation of minorities at the senior management level in 2011 ranged from 6 percent to 17 percent and from 0 percent to 44 percent at the Reserve Banks. Women's representation ranged from 31 to 47 percent at the agencies and from 15 to 58 percent at the Reserve Banks. 
Officials said the main challenge to improving diversity was identifying candidates, noting that minorities and women are often underrepresented in both internal and external candidate pools. In response to the requirements of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), in 2011 federal financial agencies and Reserve Banks began to report annually on the recruitment and retention of minorities and women and other diversity practices. All have established Offices of Minority and Women Inclusion (OMWI) as required. Many agencies and Reserve Banks indicated they had recruited from minority-serving institutions and partnered with organizations focused on developing opportunities for minorities and women, and most described plans to expand these activities. Some used employee surveys or recruiting metrics to measure the progress of their initiatives, as suggested by leading diversity practices, but OMWIs are not required to include this type of information in their annual reports to Congress. Better reporting of measurement efforts would provide Congress, agency officials, and other stakeholders additional insight into the effectiveness of diversity practices and demonstrate how agencies and Reserve Banks are following a leading diversity practice. Most federal financial agencies and Reserve Banks are in the early stages of implementing the contracting requirements of the act. For example, most now include a provision in contracts for services requiring contractors to make efforts to ensure the fair inclusion of women and minorities in their workforce and subcontracted workforce, and most have established ways to evaluate compliance. The proportion of an agency’s dollars awarded, or a Reserve Bank’s dollars paid, to minority- or women-owned businesses reported in 2011 OMWI reports ranged between 3 percent and 38 percent. 
Each agency and Reserve Bank should include in its annual OMWI report to Congress efforts to measure the progress of its diversity practices. The agencies and Reserve Banks agreed to include this information in the annual OMWI reports. Additionally, some agencies and the Reserve Banks described steps they have taken or plan to take to address the recommendation.
The communications sector’s infrastructure is a complex system of systems that incorporates multiple technologies and services. The infrastructure includes wireline, wireless, satellite, cable, and broadcasting capabilities, as well as the transport networks that support the Internet and other key information systems. Historically, networks based on time-division multiplexed (TDM) circuit switches running on copper loops provided voice service for consumers. In a 2015 report and order, FCC noted that for over 100 years customers could rely upon telecommunications carriers for backup power for their residential landline phones during power outages because power is provided over traditional copper telephone lines. In other words, telephones served by copper networks continue to work during commercial power outages as long as the telephones do not need to be plugged into an electrical outlet to function. In contrast, the physical infrastructure for IP-based networks, such as fiber and coaxial cable, does not carry power, which means telephones connected to IP networks may not work during commercial power outages (see fig. 1). According to FCC, networks other than copper and services not based on TDM may not support data-based services such as credit card readers, home alarms, and medical alert monitors. The Alarm Industry Communications Committee noted in comments filed with FCC that traditional TDM-based telephone service meets the standards necessary for fire protection and other life and safety applications, such as line seizure, the detection of a loss in the communications path, and the proper encoding and decoding of tone messages sent by the alarm panel. The committee stressed that as networks transition to IP, these traits must be preserved. 
FCC notes that there are a number of distinct but related kinds of technology transitions, including: (1) changes in network facilities and in particular retirement of copper facilities, and (2) changes that involve the discontinuance, impairment, or reduction of legacy services, irrespective of the network facility used to deliver those services. In the case of retiring copper facilities, the Communications Act of 1934, as amended (Communications Act), and FCC rules thereunder allow telecommunications carriers to transition to new facilities without needing FCC approval as long as the change of technology does not discontinue, reduce, or impair the services provided. FCC rules do require incumbent telecommunications carriers to give notice to interconnecting carriers of planned copper retirements, and new FCC rules require incumbent carriers to give notice to retail customers of such planned copper retirements when such retirements remove copper to the customers’ premises without consumer consent, along with particular consumer protection measures. Such consumer protections include explanations of how consumers may seek more information from carriers about the copper retirement process and its possible impact on consumers’ service, and links for the FCC’s consumer complaint portal. With respect to service discontinuance, under the Communications Act, telecommunications carriers must obtain FCC approval before they discontinue, reduce, or impair service to a community or part of a community. FCC regulations include procedures for carriers to discontinue, reduce, or impair service. The regulations state that to discontinue telecommunications service, carriers must notify customers of this intent and file an application with FCC. 
Once an application is received, FCC issues a public notice, considers the application on a case-by-case basis, and accepts and reviews comments on the proposed discontinuation, reduction, or impairment of telecommunications service. According to the order, FCC will normally authorize the discontinuance, reduction, or impairment of service unless it is shown that doing so would adversely affect the public convenience and necessity, with regard to which FCC considers, among other things, whether customers would be unable to receive service or a reasonable substitute from another carrier. FCC officials told us that there is no forcing action or requirement for telecommunications carriers to transition to IP by a certain date and that the technology transitions are organic processes without a single starting or stopping point. In an August 2015 order, FCC noted that recent data indicate 30 percent of all residential customers choose IP-based voice services from cable, fiber, and other carriers as alternatives to legacy voice services. Furthermore, an additional 44 percent of households were “wireless-only,” meaning these households have only wireless telephones. The August 2015 order also states that overall, almost 75 percent of U.S. residential customers (approximately 88 million households) no longer receive telephone service over traditional copper facilities because they rely on IP-based voice services or wireless phone service. Both FCC and DHS play a role in regulating the transition to IP and ensuring public safety communications are not at risk. Pursuant to the Communications Act, FCC is charged with regulating interstate and international communications by radio, television, wire, satellite, and cable throughout the United States. FCC officials stated that FCC is to promote the reliability, resiliency, and availability of the nation’s communications networks at all times, including in times of emergency or natural disaster. 
Further, FCC has the authority to adopt, administer, and enforce rules related to communications reliability and security, 911, and emergency alerting. FCC’s regulations include requirements for certain telecommunications carriers to report on the reliability and security of communications infrastructures, specifically reporting on network outages. FCC also asks carriers to report voluntarily on the status of the restoration of communications in the event of a large-scale disaster. DHS is the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect communications infrastructure. DHS’s role in critical infrastructure protection is established by law and policy. The Homeland Security Act of 2002, Homeland Security Presidential Directive 7, and the National Infrastructure Protection Plan establish an approach for protecting the nation’s critical infrastructure sectors—including communications—that focuses on the development of public-private partnerships and the establishment of a risk management framework. These policies establish critical infrastructure sectors, including the communications sector; assign agencies to each sector (sector-specific agencies), including DHS as the sector lead for the communications and information technology sectors; and encourage private-sector involvement. Pursuant to Presidential Policy Directive 21, DHS is to coordinate the overall federal effort to promote the security and resilience of the nation’s critical infrastructure from all hazards. As the nation’s telecommunications systems transition to IP networks, carriers can face challenges during times of crisis that affect end users’ ability to call 911 and receive emergency communications. These challenges include (1) preserving consumer service and (2) supporting existing emergency communications services and equipment. FCC, DHS, and other stakeholders have taken steps to help address these challenges, but some persist. 
Providers face challenges in preserving service during times of crisis such as natural disasters or outages caused by malicious acts and accidents. For example, weather events, such as hurricanes and tornados, can damage telecommunications infrastructure and the power sources communications systems rely on to provide service. A 2012 DHS report entitled 2012 Risk Assessment Report for Communications identified risks to communication networks from violent weather, including fuel not being available for generators during a commercial power outage; aerial infrastructure unable to withstand high winds; utility poles unable to withstand high winds; and underground infrastructure unable to withstand flooding. Destruction of communications infrastructure by storms can affect both legacy copper-wire and IP networks. For example, officials from New York and New Jersey told us that Hurricane Sandy damaged both copper lines and fiber optic cable. However, as explained previously, consumers with basic telephones and service provided over copper lines can generally still make calls during a commercial power outage, as long as the carrier’s central office maintains power and keeps supplying line power through an all-copper network. In contrast, consumers with service provided over IP networks require a backup power source, such as a battery, since IP network infrastructure does not carry electrical power for the purpose of powering end devices, such as telephones. Officials we contacted from four state agencies, and representatives from four trade and industry organizations and consumer groups, emphasized the importance of backup power for communications during emergencies. To address backup power needs during a commercial power outage, FCC issued rules addressing 911 reliability and the reliability and continuity of communications networks for both carriers’ central office facilities and consumers’ homes. 
Specifically, in 2013, FCC issued new rules on central office backup power certification requirements for certain 911 service providers. In an August 2015 order, FCC noted that many consumers remained unaware they needed to take action to ensure their landline telephone service remained available in the event of a commercial power outage. FCC concluded that the transition to all-IP networks had the potential to create a widespread public safety issue if unaddressed. Therefore, FCC adopted rules to help ensure consumers have the information and tools necessary to maintain landline home telephone service during emergencies. When these rules become effective, FCC will require telecommunications carriers to communicate information to consumers regarding backup power, such as the availability of backup power sources, service limitations with and without backup power, and purchase options. FCC will also require telecommunications carriers to give consumers the option to purchase a backup power device providing at least 8 hours of standby power during a commercial power outage, enabling calls, including those to 911. Furthermore, FCC will require carriers to offer consumers the option to purchase 24 hours of backup power within 3 years. In addition to weather events, telecommunications network outages can occur through malicious acts, such as vandalism and cyber attacks, and through accidental cable cuts and software coding errors. For example, a fiber optic cable north of Phoenix was vandalized in February 2015, causing large-scale telephone and Internet outages across much of northern Arizona. According to local officials we contacted, the outage lasted about a day and extended to Flagstaff, Sedona, Prescott, and surrounding areas, potentially affecting more than 300,000 people. 
Officials told us that the Flagstaff police department’s 911 lines were down, so they sent staff to a backup site at the Arizona Department of Public Safety to answer calls; the police department also lost all Internet, a loss that prevented it from checking for warrants and driver’s licenses. Additionally, officials told us that some businesses closed because they could not process credit card transactions, that ATMs did not work, and that Northern Arizona University lost Internet service. According to a Flagstaff official, the telecommunications carrier is now building, and expects to complete by 2016, an additional fiber optic cable that will improve resiliency and redundancy. Cyber attacks can also challenge both IP networks and traditional legacy networks; however, DHS officials told us that IP networks are more prone to cyber attacks than legacy networks, because legacy networks are closed systems that are less vulnerable to cyber attacks. Under the terms of a 2013 executive order and a related presidential policy directive, it is the policy of the United States to strengthen the security and resilience of its critical infrastructure against both physical and cyber threats. In a 2015 report, the Communications Security, Reliability and Interoperability Council (CSRIC) identified cybersecurity threats to Voice over IP (VoIP) and voice services that include disrupting network availability, compromising confidentiality, and spoofing a caller’s identity. According to FCC officials, CSRIC is developing recommendations to support the real-time sharing of cyber threat information among private sector entities. For our recent products related to cybersecurity and information security, see related GAO products listed at the end of this report. As with legacy copper networks, accidents also cause IP network outages affecting communication capabilities. 
For example, a truck accident in 2014 took out 400 feet of aerial fiber optic cable along a rural road in Mendocino County, California. According to a local incident report, telephone, Internet, cellular, and 911 services went down for thousands of residents, and Internet service was out almost completely along a 40-mile corridor for approximately 45 hours. According to local officials we contacted, 911 services were unavailable, and the county sheriff estimated that 20 percent of county residents lost vital services. Alert notifications through phone calls were unavailable for residents waiting to receive evacuation notices just as a nearby wildfire was growing. According to an incident report, health care providers could not be reached; banks and supermarkets closed because they were unable to function without Internet, telephone, and ATM services; and electronic food stamp benefits were unavailable. IP network outages caused by human error, such as software coding errors, can affect large numbers of people over wide geographic areas. Such outages are sometimes referred to as “sunny day” outages. For example, in April 2014, a 911 call-routing facility in Colorado stopped directing emergency calls to 911 call centers in 7 states. The outage was caused by a coding error and resulted in a loss of 911 services for more than 11 million people for up to 6 hours. Unlike legacy copper networks, IP networks permit call control to be distributed among just a few large servers nationwide, meaning each server can serve millions, or even tens of millions, of customers, according to FCC. State officials from New York and California told us that IP networks allow for increased consolidation of equipment and facilities, which means that when an outage does occur, it can potentially last longer and affect more people across a wider area than with legacy networks. 
An FCC investigation into a multistate 911 outage in 2014 found that this geographical consolidation of critical 911 capabilities may increase the risk of a large “sunny day” outage caused by software failures rather than disasters or weather conditions. According to this investigation, large-scale outages may result when IP networks do not include appropriate safeguards. In 2013, FCC adopted rules requiring 911 service providers to certify annually that they comply with industry-backed best practices or implement alternative measures that are reasonably sufficient to assure reliable 911 service. IP networks may not support existing communication services that key government officials and others rely on during times of crisis. Communications networks can become congested during emergencies, preventing government officials and other national security and emergency preparedness personnel from communicating with one another. To overcome this congestion, DHS maintains priority telecommunications services, such as the Government Emergency Telecommunications Service (GETS), that provide priority calling capabilities to authorized users. GETS was initially designed in the 1990s to operate with legacy networks during times of congestion. DHS officials told us that over the past 5 years similar priority features have been implemented in the core IP networks of select U.S. nationwide long-distance service providers. DHS officials told us that congestion, caused by high call volume and potentially by cyber attack, will continue to be a challenge in an IP environment. FCC officials told us that although congestion may not be as likely in IP networks as it was in legacy networks, it will still occur. As shown in figure 2, numerous government officials and nongovernment organizations in critical positions rely on GETS when networks become congested during times of crisis. 
The value of priority telecommunications service, compared with regular network performance, becomes apparent during times of crisis. For example, according to DHS, during Hurricane Sandy and its immediate aftermath, networks were congested due to damage and high call volume into and out of the storm-damaged area. Likewise, according to DHS officials and a DHS report on the Boston Marathon bombing, as news of the bombs spread, cell phone networks became congested with users and were largely unavailable for about 90 minutes. As shown in table 1, GETS had high call-completion rates during recent times of crisis. DHS officials told us that the current GETS will likely lose some functionality during the transition to an all-IP environment. The officials said they are planning a project that will provide priority for IP wireline access, but the project has not yet received approval for acquisition. In 2015, a multi-agency executive committee reported that the national security and emergency preparedness community must be able to rely on these priority services to complete its mission-essential communications in the IP environment. DHS is working on a program that is aimed at enabling users to have priority voice, data, and video communications as networks evolve, but according to DHS officials, data and video capabilities will not be available for several years. In the meantime, as telecommunications carriers transition from legacy networks to IP networks, key national security and emergency preparedness personnel might not be able to complete important GETS calls during times of crisis. CSRIC is currently assessing how priority services programs can take advantage of IP technologies and intends to recommend protocols that can be used to ensure priority communications upon the retirement of legacy services. 
As CSRIC noted, this is important since the federal government is losing priority capabilities that rely on networks that will eventually be replaced by IP-based infrastructure. According to FCC officials, CSRIC estimates that the recommendations on protocols and standards that can support the delivery of priority communications for first responders and national security personnel over IP networks will be complete in March 2017.

New IP networks may no longer support other government and consumer public safety services and equipment that work on the existing legacy network. Examples of such items include alarm systems and 911 call center systems. According to the Alarm Industry Communications Committee, telecommunications carriers installing new IP services may prevent alarm signals from being transmitted, and some IP services may improperly encode alarm signals. In comments submitted to FCC, the Association of Public Safety Communications Officials International (APCO) noted that alarm systems and medical alert monitors need to be provided for under new IP networks. APCO commented that alarms and alerts are a critical part of the input into 911 call centers and any shortfalls or anomalies should be identified to ensure that any effect on the public or public safety is known well ahead of time. APCO also commented that copper replacements in the foreseeable future must accommodate existing 911 call centers in the relevant service area, including those that have not yet transitioned to IP-based systems. As discussed, that transition will not be immediate, and continuity of operations with existing 911 systems is vital for public safety.

In addition to addressing the specific challenges affecting IP networks during times of crisis described above, FCC has taken a variety of other actions to help ensure the overall reliability of IP networks, including the following:

Proposed criteria in August 2015 to evaluate and compare the replacement of legacy services. 
FCC had not previously codified any specific criteria by which it evaluated the adequacy of substitute services, but proposed changes to the process in a further notice of proposed rulemaking. Specifically, FCC proposed that to be eligible for automatic grant of authority under FCC’s rules, a telecommunications carrier seeking to discontinue an existing retail service must demonstrate that any substitute service meets criteria related to (1) interoperability with devices and services, such as alarm services and medical monitoring; (2) support for 911 services and call centers; (3) network capacity and reliability; (4) quality of both voice service and Internet access; (5) access for people with disabilities, including compatibility with assistive technologies; (6) network security in an IP-supported network; (7) service functionality; and (8) coverage throughout the service area. In addition, FCC proposed to require that the evaluation of a request to discontinue a legacy retail service include whether the carrier has an adequate consumer education and outreach plan. FCC noted it believes establishing these criteria will benefit industry and consumers alike and will minimize complications when carriers seek approval for large-scale discontinuances. It also noted that having clear criteria in place will better allow carriers to know how they can obtain approval for discontinuing legacy service once they are ready to do so. According to representatives from Public Knowledge, this organization had urged FCC to establish metrics to compare the services that carriers are discontinuing with replacement services. The organization’s representatives noted that without ensuring new services are actually substitutes for the services being phased out, there is a risk that entire communities could lose critical functionality in their communications networks. 
In the further notice of proposed rulemaking, FCC tentatively concluded that several of the criteria proposed by Public Knowledge are the appropriate criteria.

Updated copper retirement rules and definitions to help ensure the public has the information needed to adapt to an evolving communications environment. FCC issued new rules in an August 2015 report and order that, among other things, require incumbent carriers to directly notify consumers of plans to retire copper networks to the customer’s premises without customer consent. In this report and order, FCC also updated its definition of copper retirement due to the frequency and scope of copper network retirement. Included in this definition is de facto retirement, i.e., the failure to maintain copper lines that is the functional equivalent of removal or disabling. FCC noted that it made these changes because the record developed in that proceeding reflected numerous instances in which notice of copper retirement was lacking, leading to consumer confusion; consumers therefore need direct notice of important network changes that may directly affect them.

Collected and analyzed network outage data, looking for trends, and communicated with telecommunications carriers. FCC developed and maintains the Network Outage Reporting System (NORS) for collecting confidential outage information from telecommunications carriers. These carriers are required to report information about disruptions or outages to their communications systems that meet specified thresholds. According to FCC, engineers on its staff monitor and analyze the outage reports in real time looking for trends in outages, communicate with carriers about outages, and produce a high-level network outage report. 
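As an illustration of how such reporting thresholds work, the sketch below checks a hypothetical outage against one commonly cited Part 4 test for wireline outages: a duration of at least 30 minutes and at least 900,000 user-minutes (users affected multiplied by outage duration). The function name and exact figures are assumptions for illustration only; the actual thresholds vary by service type.

```python
# Illustrative sketch of an outage-reporting threshold check.
# The 30-minute and 900,000 user-minute figures reflect one commonly
# cited FCC Part 4 test for wireline outages; treat the exact values
# and this function as assumptions, not FCC's implementation.

MIN_DURATION_MIN = 30          # outage must last at least 30 minutes
MIN_USER_MINUTES = 900_000     # users affected x duration in minutes

def nors_reportable(users_affected: int, duration_min: int) -> bool:
    """Return True if an outage meets this illustrative reporting test."""
    user_minutes = users_affected * duration_min
    return duration_min >= MIN_DURATION_MIN and user_minutes >= MIN_USER_MINUTES

# A 6-hour outage affecting 11 million people (as in the 2014 multistate
# 911 outage described earlier) far exceeds the threshold:
print(nors_reportable(11_000_000, 6 * 60))   # True
# A 45-minute outage affecting 5,000 users does not:
print(nors_reportable(5_000, 45))            # False
```

Both conditions must hold: a very large outage that is over too quickly, or a long outage affecting few users, would not meet this particular test.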
FCC officials told us that even though the outage information is not publicly reported, they believe the act of reporting helps network providers correct problems. By combining multiple reports, FCC gains insight into network reliability, and working with carriers cooperatively leads to better outcomes with fewer, less severe outages. FCC shares NORS reports with DHS’s Office of Emergency Communications, which may provide information from those reports to such other governmental authorities as it may deem appropriate. Otherwise, reports filed in NORS are presumed confidential and are thus withheld from routine public inspection. However, in March 2015, FCC proposed, among other things, granting states read-only access to those portions of the NORS database that pertain to communications outages in their respective states to advance compelling state interests in protecting public health and safety. Representatives from two state agencies and two consumer organizations we contacted told us that granting states access to outage reports would improve the overall reliability of communications networks by giving the states additional information.

Tracked the status of the restoration of communications in the event of a large-scale disaster. FCC developed and maintains the Disaster Information Reporting System (DIRS), a voluntary system used by members of the communications sector intended to provide information on the status of restoration efforts to FCC and DHS. DIRS reports include information on major equipment failures and the service and geographic area affected. According to FCC officials, DIRS is only activated during major disasters, and since these incidents are unique, the system is not designed to track trends. 
For example, the officials said that DIRS is often activated during hurricanes, but because of differences in wind speed, direction, and other factors, outages from one hurricane do not necessarily indicate that infrastructure will be affected the same way in another hurricane.

Chartered CSRIC to provide FCC with recommendations on ways to improve the security, reliability, and interoperability of communications systems. FCC officials told us CSRIC has not specifically looked at ways to improve the reliability of IP networks; however, a number of working groups aim to improve the overall reliability of telecommunications networks. Specifically, in September 2014, CSRIC issued a report and series of best practices for providing backup power to customers relying on IP networks and on consumer notification.

DHS has also taken the following actions to help ensure the reliability of IP networks during times of crisis:

Coordinated with other federal government agencies, owners and operators of communications networks, and state, local, tribal, and territorial governments. As the Sector Specific Agency for the communications sector, DHS manages the industry-government relationship, encourages private sector participation through the sector coordinating councils, and maintains the Communications Sector Specific Plan. According to representatives of the Communications Sector Coordinating Council, the Council works closely with DHS, and they noted DHS is helpful in providing assistance for educational and outreach programs, including ensuring training opportunities occur when needed. 
DHS also coordinates with stakeholders by participating in CSRIC and by serving as the Executive Secretariat support to the President’s National Security Telecommunications Advisory Committee, a presidential advisory group composed of chief executives from major telecommunications companies, network service providers, and the information technology and aerospace industries. Additionally, DHS’s Office of Emergency Communications provides coordination support by offering training and tools to stakeholders.

Coordinated the development and implementation of the 2010 Communications Sector Specific Plan and is currently working on an updated plan. The sector specific plan was developed by DHS, the Communications Sector Coordinating Council, and the Government Communications Coordinating Council and is intended to ensure the sector effectively coordinates with sector partners, other sectors, and DHS. According to representatives of the Communications Sector Coordinating Council, they met regularly with DHS to update the sector specific plan. The plan provides a framework for industry and government partners to establish a coordinated strategy to protect the nation’s critical communications infrastructure. Part of this framework includes conducting national risk assessments. With respect to communications, DHS issued a report entitled 2012 Risk Assessment Report for Communications, which, according to the report, represents the culmination of a 2-year period during which 29 government and 32 industry sector partners assessed physical, cyber, and human risks of concern that could potentially affect local, regional, and national communications. 
According to DHS officials, the Communications Sector Coordinating Council and Government Communications Coordinating Council determined an updated risk assessment was not needed because details of the changing risk environment will be discussed and updated in other sector documents, such as the sector specific plan. DHS officials also told us the new plan should be completed in 2015, will be updated to include the communications sector’s transition to IP networks, and will include more cybersecurity-related content. We did not evaluate the 2010 plan because it was being updated and did not evaluate the 2015 plan because it was not issued at the time of our review.

Coordinated the development of the 2014 National Emergency Communications Plan. This plan aims to enhance emergency communications capabilities at all levels of government in coordination with the private sector, nongovernmental organizations, and communities. DHS developed recommendations to help meet the plan’s five broad goals related to (1) governance and leadership, (2) planning and procedures, (3) training and exercises, (4) operational coordination, and (5) research and development. According to the plan, DHS’s Office of Emergency Communications intends to coordinate with public safety agencies and emergency responders and will identify strategies and timelines to accomplish the plan’s goals, objectives, and recommendations and measure progress nationwide.

In the private sector, telecommunications carriers have also worked to ensure their IP networks are functional during times of crisis in the following ways:

Built resiliency and reliability into IP networks as part of business operations and planning for emergencies. 
According to DHS, as the owners and operators of the majority of the nation’s communications networks, private sector entities are responsible for protecting key commercial communications assets, as well as for ensuring the resiliency and reliability of communications during day-to-day operations and emergency response and recovery efforts. In addition, commercial communications carriers have a primary role in network restoration during outages and service failures and support reconstitution for emergency response and recovery operations. Representatives of the three largest telecommunications carriers told us they are taking action at the company level to improve reliability because building reliability and resilience into networks is part of normal business operations. For example, these carriers have developed emergency preparedness plans for events such as hurricanes to help ensure network reliability. These plans include replacing poles, decreasing dependency on aerial facilities, and adding generators. Officials from one major carrier told us that customers expect the phone to work when they pick it up to make a call and that the company risks losing customers if it cannot provide reliable service.

Participated in a variety of groups intended to provide information and improve the overall reliability of communications networks. For example, in addition to groups like CSRIC and the Communications Sector Coordinating Council described above, telecommunications carriers participate in other organizations such as the Alliance for Telecommunications Industry Solutions (ATIS). ATIS’s Network Reliability Steering Committee advises the communications industry by developing and issuing standards, technical requirements and reports, best practices, and annual reports. ATIS also launched a task force looking at how the IP transition affects public safety communications infrastructure. 
State authorities from three public utility agencies told us that they have taken action to ensure the reliability of IP networks. These actions include collecting consumer complaints, levying fines, reviewing outage data, and making recommendations for improvement. For example, officials at one state agency told us that they receive and investigate complaints and, if an issue is identified, levy fines or open a rulemaking proceeding. Officials at another state agency told us they review outage data and make recommendations for improvements based on lessons learned. According to DHS’s 2010 Sector Specific Plan, the state Public Utility Commission is the primary authority for implementing regulations, and individual telecommunications carriers work directly with state authorities regularly to address regulatory issues. However, according to the National Regulatory Research Institute, more than half the states have made changes to their regulatory authority that reduced or eliminated retail telecommunications regulation. For example, officials at one state agency told us that although the commission previously had a role in ensuring the reliability and robustness of the communications network, it no longer has that authority.

FCC is collecting data on the IP transition and sought comment on collecting additional data on the transition’s effect on consumers, but could do more to ensure it has the information it needs to make data-driven decisions about the IP transition. The primary way FCC intends to gather information about the IP transition is through service-based experiments. In particular, FCC established a framework in January 2014 within which carriers can conduct voluntary service-based experiments. 
These voluntary experiments would allow telecommunications carriers to substitute new communications technologies for the legacy services over copper lines that they currently provide to customers and to test a variety of approaches to resolving operational challenges that result from transitioning to new technology and that may affect users. According to FCC, these experiments are not intended to test technologies or resolve legal or policy debates. FCC established technical parameters for each experiment, including requiring each proposal to provide sufficiently detailed information about how the experiment will be designed to allow meaningful public comment and thorough evaluation of the proposed experiment. Specifically, each experiment proposal must include information such as: the purpose and proposed metrics for measuring success; the scope of the experiment (geography, product, or service offering); the technical parameters, including a description of any physical or network changes and how the experiment will affect customers, other providers, and product or service offerings; and timelines. FCC noted it would find useful experiments that collect and provide data on key attributes of IP-based services, such as network capacity, 911 services and call centers, and cybersecurity. According to FCC officials, the voluntary experiments can begin without FCC approval; however, carriers planning to discontinue service have to seek permission from FCC prior to doing so. At the time of our review, the experiments were still in the early stages, and FCC had not approved the discontinuation of any existing services. As shown in figure 3, at the time of our review, AT&T had proposed experiments in two locations and CenturyLink had proposed one location. According to AT&T documents, AT&T initially plans to encourage voluntary migration to IP-based services for existing customers through outreach and education. 
Subsequently, AT&T plans to seek FCC approval to “grandfather” existing customers and offer only wireless and wireline IP-based services for new orders. The documents also note that eventually those existing customers will also have to transition to such alternatives, but not until FCC has evaluated the results and approved AT&T to discontinue legacy service and move forward with the full IP transition. As part of the trials, AT&T plans on collecting and reporting to FCC information including data on the progress of the experiment, customer complaints, network performance, call quality, and issues relating to access by persons with disabilities. According to FCC officials, FCC intends to contract with a major research organization to collect and analyze data from the AT&T experiment locations. At the time of our review, FCC officials told us this data collection was expected to begin within several months. Unlike the AT&T experiments, CenturyLink submitted a proposal that does not directly affect consumers. Instead, this experiment focuses on business end users and service providers, and according to CenturyLink’s own proposal, the experiment would be very narrow in scope. CenturyLink also noted that it was not seeking to discontinue any services or requesting a waiver of any FCC rules, even for the purposes of the experiment.

FCC is taking and plans to take additional steps to collect information on how consumers are experiencing the IP transition. FCC officials said they have begun taking action to improve consumer complaint data and make them more transparent, including launching a new consumer help center intended to collect additional consumer complaint data and working with various groups to share this and other data. 
FCC also plans to work with state, local, and tribal governments to leverage existing data-collection efforts and develop common definitions, categories, and a metric that will allow for comparison of consumer experiences in different parts of the country and help create a more comprehensive picture of the consumer experience as networks transition. FCC sought comment on how it could supplement its data-gathering process on the effects of technology transitions beyond consumer complaints and inquiries. In light of the scale of the IP transition and the potential for disruptions to consumers and public safety, FCC recognizes it will need information on the effects of the transition to ensure IP communications networks are reliable. Federal standards for internal control, which provide the overall framework for identifying and addressing major performance and management challenges, stress the importance of obtaining information from external sources that may have a significant impact on an agency achieving its goals. Furthermore, in its January 2014 order, FCC noted that one of its statutory responsibilities is to ensure that its core values, including public safety and consumer protection, endure as the nation transitions to modernized communications networks. In the order, FCC noted that fulfilling this responsibility requires that FCC learn more about how the modernization of communications networks affects consumers. The order also states that FCC intends to collect data through the service-based experiments that would permit the making of data-driven decisions about the IP transition. However, it is unclear if FCC will be able to make data-driven decisions about the IP transition because of the limited number and scale of the proposed experiments. For example, one major carrier did not propose any experiments. 
Furthermore, as some organizations have commented, AT&T’s experiments have limitations, including the small number of experiments; a lack of geographic dispersion; and a lack of diverse population densities, demographics, and climates. These experiments, as planned, will affect fewer than 55,000 living units combined, which, according to Public Knowledge, likely represents approximately 0.07 percent of AT&T’s wireline customers. Additionally, the proposed experiments do not include high-density urban areas; areas with colder climates or mountainous terrain; or areas that encompass diverse populations. Finally, none of the proposed experimental areas includes critical national security or public safety locations, such as those serving Department of Defense or Federal Aviation Administration facilities. FCC’s other efforts related to data collection on the IP transition include enhancing consumer complaint data, leveraging existing data collection efforts at the state and local level, and seeking comments on how FCC could supplement its data-gathering process. However, it remains unclear if FCC can meet its information needs through these efforts. For example, as noted above, DHS officials expressed concerns about the priority services that national security and emergency preparedness personnel rely on during times of crisis, such as GETS, losing functionality in an IP environment. FCC may need additional information to help ensure that such personnel can continue to make important calls during times of crisis. Another area of uncertainty with the IP transition is the availability of 911 services and compatibility with medical devices and other equipment. In particular, according to AT&T, in its proposed experimental areas, approximately a third of customers who chose not to migrate to wireless service expressed concerns regarding 911 calls and compatibility with medical devices and other equipment. 
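The scale of Public Knowledge's estimate can be sanity-checked with simple arithmetic: inverting the cited 0.07 percent shows the implied size of the total wireline base. The sketch below performs only that inversion on the figures from the comment; it is not an independent estimate of AT&T's customer counts.

```python
# Back-of-the-envelope check of the scale figures cited above:
# fewer than 55,000 living units, said to be roughly 0.07 percent
# of AT&T's wireline customers. Inverting the percentage gives the
# implied total base. Figures come from the comment, not new data.
living_units = 55_000
share = 0.0007          # 0.07 percent expressed as a fraction

implied_base = living_units / share
print(f"Implied wireline base: {implied_base:,.0f} living units")
# roughly 78.6 million living units
```

An implied base on the order of tens of millions of living units underscores how small a slice of AT&T's footprint the proposed experiments cover.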
Furthermore, FCC’s solicitation of comments about the data-gathering process may not necessarily result in a change in FCC’s existing policies. Although FCC’s efforts to collect data represent a good start, we found FCC lacks a detailed strategy that outlines how it will address its remaining information needs, including determining what information from states and localities is available to be leveraged, a methodology for obtaining that information, and the resources required. As a result, FCC cannot ensure that it has the information necessary to make data-driven decisions about the IP transition.

FCC has recognized the importance of collecting data that would enable it to make data-driven decisions about the IP transition and has sought comment on how it could supplement its data-gathering process. Nevertheless, at the time of our review, FCC had little information on the effect of the transition, mainly because the service-based experiments, FCC’s primary method for collecting data on the transition, were very limited in number and scale, did not cover consumer services in urban areas, and did not include critical national security or public safety locations. Although FCC has other data collection efforts under way, it is unclear whether FCC’s efforts will address its remaining information needs, especially those related to the functionality of priority services and 911 availability. Developing a strategy for collecting information about how the IP transition affects public safety and consumers would help FCC address these areas of uncertainty as it oversees the IP transition and enable FCC to make data-driven decisions.

To strengthen FCC’s data collection efforts, the Chairman of FCC should develop a strategy to gather additional information on the IP transition to assess the transition’s potential effects on public safety and consumers.

We provided a draft of this report to FCC and DHS for their review and comment. 
FCC provided written comments, reproduced in appendix II, as well as technical comments, which we incorporated as appropriate. DHS provided technical comments, which we incorporated as appropriate. In its written comments, FCC did not state whether it agreed or disagreed with our recommendation that it develop a strategy to gather additional information on the IP transition to assess the transition’s potential effects on public safety and consumers. FCC stated that it agreed with us about the importance of ensuring an informed, data-driven process for determining which services can be seamlessly supported during the IP transition, which services will need to be transformed, and which services will no longer be supported in an IP world, while preserving FCC’s core functions of public safety, universal service, competition, and consumer protection. FCC noted that it is essential that it have sufficient information to make informed decisions and further stated that it has a comprehensive data strategy in place to oversee the IP transition. According to FCC, its strategy for overseeing the transition combines traditional regulatory approaches with innovative methods that match the dynamism of the communications environment. FCC stated that the service-based experiments are by no means the sole means by which FCC is overseeing the IP transition and provided examples of actions it has taken to oversee the transition. For example, FCC stated that it took the following actions, which we had already highlighted in our report: enhanced its notification process for retirement of copper facilities; provided clear direction to industry concerning the circumstances in which approval must be sought before removing a service from the marketplace; collected NORS disruption data; and engaged with the private sector and other relevant stakeholders through FCC’s federal advisory committees, including CSRIC. 
In the letter, FCC also stated that it had taken action on some issues that were outside the scope of our review, including revising information it obtains from states on the states’ collection and use of 911 fees and maintaining a “Text-to-911 Registry.” While these actions are useful for FCC’s oversight of the IP transition, we continue to believe that FCC needs to develop a strategy to gather additional information on the potential effects of the IP transition. This is especially true with respect to the priority services that national security and emergency preparedness personnel rely on: with a strategy to collect additional information on the IP transition, FCC could better ensure that such personnel can continue to make important calls during times of crisis. Furthermore, as AT&T noted, some residential customers have expressed concerns regarding 911 availability and compatibility with medical devices and other equipment in an IP environment. Developing a strategy to collect additional information on the transition’s effects could help FCC address these areas of uncertainty.

We are sending copies of this report to the Chairman of FCC, the Secretary of Homeland Security, and appropriate congressional committees. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

This report examines the reliability of the nation’s communications networks in an Internet Protocol (IP) environment. 
Specifically, we reviewed (1) the potential challenges affecting IP networks during times of crisis and how the challenges affect end users, and (2) the actions FCC, DHS, and other stakeholders have taken to ensure the reliability of IP communications during times of crisis. To identify challenges affecting IP networks and how the challenges affect end users, we reviewed relevant documents from the Federal Communications Commission (FCC) and Department of Homeland Security (DHS) including orders, notices and proposed rulemakings, reports, and risk assessments, as well as relevant statutes and regulations. We reviewed comments filed with FCC regarding the IP transition and emergency communications. To ensure we reviewed a broad range of comments, we selected comments by stakeholders that represented a variety of interests, including public interest groups, industry and trade associations, and state and local authorities. We reviewed reports and best practices from federal advisory committees, trade associations, and consumer groups. We reviewed our prior recommendations, as well as those made by DHS, the Communications Security, Reliability, and Interoperability Council, and the National Security Telecommunications Advisory Committee related to priority telecommunications services. We also searched various Web-based databases to identify existing articles, peer-reviewed journals, trade and industry articles, government reports, and conference papers. We identified articles from 2010 to 2015. We examined summary-level information about the literature identified in our search that we believed to be germane to our report. It is possible that we may not have identified all of the reports with findings relevant to our objective, and there may be other challenges affecting IP networks during times of crisis that we did not present. 
To determine the actions taken by FCC, DHS, and other stakeholders to ensure the reliability of IP communications during times of crisis, we reviewed relevant FCC proceedings, reports, and documents. Specifically, we reviewed FCC proceedings related to technology transitions and ensuring consumer backup power for continuity of communications, reports on disruptions to communications, reports on major disruptions to 911-related communications, and documents related to outage-reporting information. To identify information on the proposed IP transition experiments, we reviewed AT&T and CenturyLink’s proposals, stakeholder comments submitted to FCC on these proposals, and other documents related to the experiments. We assessed FCC’s efforts to collect data on the effect of the IP transition against criteria established in Standards for Internal Control in the Federal Government. We reviewed relevant DHS documents including the 2013 National Infrastructure Protection Plan, the 2010 Communications Sector Specific Plan, and the 2012 Risk Assessment Report for Communications. We also reviewed reports and best practices from the Communications Security, Reliability, and Interoperability Council and the Alliance for Telecommunications Industry Solutions. To obtain additional information on the challenges affecting IP networks and how these challenges affect end users, and to obtain information on state efforts to ensure reliability, we selected locations in six states—New York, New Jersey, Arizona, California, Florida, and Alabama—to provide additional details. We selected these locations because they represent a mix of communities that experienced a major communications outage since 2012 or contain an area with a proposed IP transition experiment. These regions also contain a mix of rural, suburban, and urban communities, and demographics including economic differences and average age of residents. We reviewed documents such as reports, comments to FCC, and comments to state agencies. 
We interviewed officials from state Public Utility Commissions or similar agencies including the New York Department of Public Service, New Jersey Board of Public Utilities, California Public Utilities Commission, and Florida Public Service Commission. We interviewed representatives from other organizations that had experienced the effects of outages or were involved with the proposed IP transition experiments including the City of Flagstaff, the Arizona Telecommunications and Information Council, the Broadband Alliance of Mendocino County, and the Communications Workers of America. We interviewed officials from FCC and DHS and representatives from AT&T, Verizon, and CenturyLink. We also interviewed representatives from selected stakeholder groups including trade and industry associations and consumer and public interest groups, as shown in table 2. We identified stakeholders to interview based on our review of comments filed in FCC’s Technology Transitions proceeding, as well as based on recommendations from other organizations we interviewed. In addition to the individual named above, Sally Moino (Assistant Director), Richard Calhoon, David Hooper, Michael Kaeser, Aaron Kaminsky, Malika Rice, Amy Rosewarne, and Andrew Stavisky made key contributions to this report. Federal Information Security: Agencies Need to Correct Weaknesses and Fully Implement Security Programs. GAO-15-714. September 29, 2015. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. GAO-15-725T. June 24, 2015. Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. GAO-15-573T. April 22, 2015. Information Security: IRS Needs to Continue Improving Controls over Financial and Taxpayer Data. GAO-15-337. March 19, 2015. Information Security: FAA Needs to Address Weaknesses in Air Traffic Control Systems. GAO-15-221. January 29, 2015. Information Security: Additional Actions Needed to Address Vulnerabilities That Put VA Data at Risk. 
GAO-15-220T. November 18, 2014. Information Security: VA Needs to Address Identified Vulnerabilities. GAO-15-117. November 13, 2014. Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to Building and Access Control Systems. GAO-15-6. December 12, 2014.
The communications sector is essential to the nation's economy and government operations and for the delivery of public safety services, especially during emergencies. As the sector transitions from legacy networks to IP-based networks, consumer and public safety groups and others have raised concerns about how the communications networks will function during times of crisis. GAO was asked to examine the reliability of the nation's communications network in an IP environment during times of crisis. GAO examined (1) the potential challenges affecting IP networks in times of crisis and how the challenges may affect end users, and (2) the actions FCC, DHS, and other stakeholders have taken to ensure the reliability of IP communications. GAO reviewed FCC and DHS documents as well as FCC proceedings and comments filed with FCC on the IP transition and emergency communications. GAO assessed FCC's efforts to collect data on the effect of the IP transition. GAO interviewed officials from FCC and DHS, and representatives from the three largest telecommunications carriers, industry associations, and public interest and consumer advocacy groups. As the nation's telecommunications systems transition from legacy telephone networks to Internet Protocol (IP)-based networks, telecommunications carriers can face challenges during times of crisis that affect end users' ability to call 911 and receive emergency communications. These challenges include (1) preserving consumer service and (2) supporting existing emergency communications services and equipment. For example, during power outages, consumers with service provided over IP networks and without backup power can lose service. The Federal Communications Commission (FCC) is working to address this issue by adopting rules that will require carriers to provide information to consumers on backup power sources, among other things. 
Another challenge is that IP networks may not support existing telecommunications “priority” services, which allow key government and public-safety officials to communicate during times of crisis. FCC, the Department of Homeland Security (DHS), and telecommunications carriers have taken various steps to ensure the reliability of IP communications, for example: FCC proposed criteria—such as support for 911 services, network security, and access for people with disabilities—to evaluate carriers' replacement of legacy services when carriers seek to discontinue existing service. DHS coordinated the development of the Communications Sector Specific Plan to help protect the nation's communications infrastructure. Carriers told GAO they build resiliency and reliability into their IP networks as part of business operations and emergency planning. FCC is also collecting data on the IP transition, but FCC could do more to ensure it has the information it needs to make data-driven decisions about the transition. FCC has emphasized that one of its statutory responsibilities is to ensure that its core values, including public safety capabilities and consumer protection, endure as the nation transitions to modernized networks. FCC stated that fulfilling this responsibility requires learning more about how the transition affects consumers. FCC plans on collecting data on the IP transition primarily through voluntary experiments proposed and run by telecommunications carriers. However, it is unclear if FCC will be able to make data-driven decisions about the IP transition because of the limited number and scale of the proposed experiments. In particular, there are only three proposed experiments that cover a very limited number of consumers; none of the experiments covers consumer services in high-density urban areas or includes critical national-security or public-safety locations. 
FCC also sought comment on how to supplement its data-gathering process; however, soliciting comments may not necessarily result in a change in FCC's existing policies. GAO found FCC lacks a detailed strategy that outlines how it will address its remaining information needs. Developing a strategy for collecting information about how the IP transition affects public safety and consumers would help FCC make data-driven decisions and address areas of uncertainty as it oversees the IP transition. FCC should strengthen its data collection efforts to assess the IP transition's effects. FCC did not agree or disagree with the recommendation and stated it has a strategy in place to oversee the IP transition. However, GAO continues to believe FCC should strengthen its data collection efforts.
Offset arrangements are not new to military export sales. The use of offsets, specifically coproduction agreements, began in the late 1950s and early 1960s in Europe and Japan. A country’s desire to coproduce portions of weapon systems was based on needs such as maintaining domestic employment, creating a national defense industrial base, acquiring modern technology, and assisting its balance of payments position. In 1984, we reported that offsets were a common practice and that demands for offsets on defense sales would continue to increase. The United States is the world’s leading defense exporter and held about 52 percent of the global defense export market in 1994 (the latest year for which statistics are available). Offsets are often an essential part of defense export sales. Offset agreements may specify the level of offset required, normally expressed as a percentage of the sales contract. Offset agreements may also specify what types of activity are eligible for offset credit. Offset activities directly related to the weapon system being sold are considered “direct” offsets, while those involving some other weapon system or unrelated defense or nondefense goods or services are considered “indirect” offsets. Countries may also include conditions specifying the transfer of high technology and where and with whom offset business must be done. Other provisions include requirements that offset credit be granted only for new business and that credits be granted only if local content exceeds a minimum level. Negotiating offset credit is an important part of implementing offset agreements. Countries can grant additional offset credit to encourage companies to undertake highly desirable offset activities. 
For example, countries may offer large multipliers for advanced technology or training that can greatly reduce a company’s cost of meeting its offset obligation. However, a country can also establish criteria that make it difficult for a company to earn offset credit. Some countries, such as the United Kingdom and the Netherlands, cite restrictions in the United States and other defense markets and note that their offset policies are needed to ensure that their defense industries are given an opportunity to compete. The United States does not require offsets for its foreign military purchases, but it does have requirements that favor domestic production. The Defense Production Act of 1950 allows the Secretary of Defense to preserve the domestic mobilization base by restricting purchases of critical items from foreign sources. While not precluding foreign suppliers, regulations implementing the Buy American Act of 1933 allow price preferences for domestic manufacturers, and annual Department of Defense (DOD) appropriation acts sometimes contain prohibitions on foreign purchases of specific products. The General Agreement on Tariffs and Trade (GATT) prohibits the practice of offsets in government procurement, except for procurement of military weapons. In 1990, the North Atlantic Treaty Organization (NATO) proposed a code of conduct for defense trade to regulate offsets in military exports, but did not adopt it. In addition, reciprocal memorandums of understanding between the United States and several major allies include provisions to consult on the adverse effects of offsets. Over the last 10 years, the countries in our study have increased their demands for offsets, begun to emphasize longer term offset projects and commitments, or initiated offset requirements. All the countries in our review have increased their offset demands on U.S. companies to achieve more substantial economic benefits. 
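The crediting arithmetic behind these policies is straightforward: the obligation is a stated percentage of the contract value, and each qualifying activity earns credit equal to its value times any multiplier the buyer grants. A minimal sketch of that arithmetic follows; the dollar figures and the 3x multiplier are purely illustrative assumptions and do not represent any specific agreement we reviewed.

```python
def offset_obligation(contract_value, offset_pct):
    """Required offset, expressed as a percentage of the sales contract."""
    return contract_value * offset_pct / 100.0

def offset_credit(transaction_value, multiplier=1.0):
    """Credit earned for a qualifying activity; buyers may grant
    multipliers greater than 1 for highly desired projects
    (e.g., technology transfer or training)."""
    return transaction_value * multiplier

# Illustrative example: a $100 million sale with a 100-percent offset requirement.
obligation = offset_obligation(100_000_000, 100)

# A $20 million technology-transfer project credited at an assumed 3x
# multiplier satisfies $60 million of the obligation -- far less actual
# spending than the face value of the credit.
credit = offset_credit(20_000_000, multiplier=3.0)
remaining = obligation - credit  # $40 million of the obligation still unmet
```

This is why, as noted above, a generous multiplier can greatly reduce a company’s cost of compliance, while strict crediting criteria have the opposite effect.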
Canada, the Netherlands, Spain, South Korea, and the United Kingdom have all had offset policies since at least 1985. These countries are using new approaches in their offsets to increase economic benefits. These changes include targeting offset activities and granting offset credit only for new business rather than existing business. For example, Canada and the United Kingdom are less willing to grant offset credit for a company’s existing business in the country, and South Korea has increased its demands for technology transfer and training as part of any offset agreement. Since 1990, Kuwait, Taiwan, and the United Arab Emirates have all established new policies for offsets on foreign military purchases. They are now using offsets to help diversify their economies or promote general economic development. Although these countries are new entrants, company officials said they are knowledgeable about the defense market, and their offset policies can be as demanding as those of countries with long-established offset policies. For example, the United Arab Emirates requires 60 percent of the value of the contract to be offset by commercially viable business ventures and grants offset credit based only on the profits generated by these investments. Singapore and Saudi Arabia have both recently reinstated their offset policies. Both countries have intermittently required offsets since the 1980s. However, company officials said these countries now regularly pursue offsets on their defense purchases. Saudi Arabia’s new approach is less formal and relies on best effort commitments from companies rather than formal agreements. Previously, some of the countries in our review allowed companies to meet offset obligations with existing business in the country or with one-time purchases of the country’s goods. A country’s requirements for direct offsets were sometimes met through projects calling for the simple assembly of weapon systems components. 
These types of offset activities often did not result in any long-term economic benefits. More recently, buying countries have changed their offset strategies in an attempt to achieve lasting economic benefits. Countries such as Kuwait and the United Arab Emirates are seeking offset activities that will help create viable businesses, increase local investment, or diversify the economy. Countries such as Canada, the Netherlands, and the United Kingdom are trying to form long-term strategic relationships with the selling companies to generate future work, instead of always linking offset activities to individual sales. The types of offsets required by the countries in our review depend on their offset program goals and the country’s economy—whether it is developed, newly industrialized, or less industrialized. Companies undertake a broad array of activities to meet these offset obligations. A country’s offset requirements policy outlines the types of offset projects sought by the country. All 10 countries in our review now have offset requirements. These requirements include the amount of offset required (expressed as a percentage of the purchase price); what projects are eligible for offset credit; how these projects are valued (e.g., offering multipliers for calculating credit for highly desired projects); nonperformance penalties; and performance periods. Countries in our study with developed economies encourage offsets related to the defense or aerospace industries. These offsets typically involve production and coproduction activities related to the weapon system being acquired but could also involve unrelated defense or aerospace projects. These countries have well-established defense industries and are using offsets to channel work to their defense companies, thus supporting their defense industrial base. Canada, the Netherlands, Spain, and the United Kingdom are all in this group. 
We reviewed 40 offset agreements, with a stated value of $5.6 billion, between U.S. companies and countries with developed economies. The following are highlights from these agreements: The agreements with the United Kingdom reflected its focus on defense, requiring that offsets be satisfied through British companies certified by the government as performing defense-related work. A majority of the agreements required that 100 percent of the sale be offset, although the percentage specified in the agreements ranged from 50 percent to 130 percent. The offset agreements with the Netherlands focused on defense-related or high-technology nondefense projects and specified a minimum local content threshold before full offset credit would be granted. Such local content requirements effectively increased the amount of business activity required to generate credit. Most of the agreements required 100 percent of the sale to be offset with a range of 45 percent to over 130 percent. Coproduction of defense systems is a feature found in some of the offset agreements with Spain. These agreements specified the particular products that would be procured from Spain’s defense industry as part of the offset program. The offset percentage required in these agreements ranged from less than 30 percent to over 100 percent. The offset agreements with Canada showed the country’s focus on encouraging U.S. procurement and other arrangements with Canadian suppliers in defense, aerospace, and other high-technology industries. Most of the agreements also included requirements that contractors place work throughout the Canadian provinces and also specified that a portion of the offset be done with small businesses. The offset percentage required in these agreements ranged from less than 40 percent to 100 percent. The following are examples of the offset projects that both U.S. 
and foreign firms have implemented or proposed in these developed economies: The German company Krauss-Maffei agreed to coproduce tanks in Spain to offset Spain’s purchase of 200 Leopard 2 main battle tanks. (Countertrade Outlook, Vol. XIII, No. 16, Aug. 21, 1995, p.10.) Lockheed will establish a Canadian firm as an authorized service center for C-130 aircraft to satisfy offset obligations for its sale of C-130s to Canada. This will ensure that the Canadian firm has ongoing repair and overhaul work for this aircraft. Lockheed will also procure assemblies and avionics in Canada for its C-5 transport aircraft. (Countertrade Outlook, Vol. XIII, No. 10, May 22, 1995, p.3.) McDonnell Douglas will offset the United Kingdom’s purchase of Apache attack helicopters (valued at nearly $4 billion) by producing much of the aircraft in the United Kingdom, with British equipment. U.S. suppliers are committed to buying $350 million worth of British equipment for U.S.-built Apache helicopters. In addition, Westland Helicopters, a United Kingdom firm, has the potential to get up to $955 million worth of sales for future support services for Apache helicopters worldwide. (Defense News, Aug. 21-27, 1995, p. 12.) Most U.S. companies we reviewed did not have significant difficulty meeting defense-related offsets in Canada, the Netherlands, and the United Kingdom because those countries have well-established defense industries. In addition, many of the companies have significant existing business in these countries, often making it easier for the companies to implement offset projects. Meeting Spain’s offset demands was more difficult because its defense industry is not as advanced as other Western industrialized countries. Some of the U.S. companies in our review expressed concern about the impact of defense-related offsets on the U.S. defense industry, particularly the loss of production to U.S. defense subcontractors and suppliers. 
Appendix I provides detailed information on the terms of the offset agreements and the requirements for each developed country we reviewed. Countries in our study with developing defense and commercial industries, such as South Korea, Singapore, and Taiwan, have pursued both defense-related and nondefense-related offsets. Offsets in these countries typically involve technology transfer in defense or comparable high-technology industries. They see offsets as a means to further develop their defense base and economy. We reviewed 31 offset agreements, with a stated value of $5.1 billion, with countries that have newly industrialized economies. The following are highlights from these agreements: The agreements with South Korea emphasized work in the defense and aerospace industries, particularly the transfer of related high technology. Many agreements included multipliers to encourage work in these sectors. Many also required the purchase of unrelated products for export resale in the United States and other markets. Offset agreements generally required at least a 30-percent offset with a range of less than 30 percent to more than 60 percent. The offset agreements with Singapore focused on defense-related offset projects, including direct production of parts for purchased weapon systems. The offset percentage required in these agreements ranged from 25 percent to 30 percent. In contrast to other newly industrialized countries, the agreements with Taiwan focused on commercial projects aimed at developing long-term supplier relationships with foreign firms. The agreements offered multipliers for technology transfer, training, and technical assistance reflecting the priority the government places on these activities. These agreements all called for a 30-percent offset goal. The following are examples of the offset projects that both U.S. 
and foreign firms have implemented or proposed in these newly industrialized economies: Dassault, as part of an offset arrangement for the $3.5-billion sale of Mirage fighter aircraft to Taiwan, agreed to form partnerships with firms in Taiwan to transfer technology and manufacture equipment for civilian markets. (Jane’s Defence Weekly, Sept. 2, 1995, p.17.) Lockheed-Martin, as part of its offset obligation for the sale of 150 F-16 fighter aircraft to Taiwan, is seeking suppliers in Taiwan for repair contracts for more than 500 aircraft components. Taiwan regards the offset program as an opportunity to (1) become a regional aviation maintenance center and (2) obtain similar work on another aircraft under development by Lockheed-Martin. (Countertrade Outlook, Vol. XIII, No. 13, July 10, 1995, p.4.) Lockheed-Martin Tactical Aircraft Systems, formerly the General Dynamics Fort Worth Company, is in the process of satisfying South Korea’s offset requirements on the purchase of 120 F-16 fighter aircraft through several aerospace projects. These projects include codevelopment of a new trainer aircraft, training, transfer of castings and forgings technology, and repair and overhaul of aerospace equipment. As part of the sale, General Dynamics agreed to transfer relevant manufacturing and assembly know-how to allow South Korea to manufacture 72 aircraft and assemble an additional 36 aircraft from kits that were manufactured in the United States. The remaining 12 aircraft were to be completely assembled in the United States. U.S. companies generally considered the offset requirements of Singapore and Taiwan to be manageable. However, company officials noted that despite the relatively low percentage of offset required in South Korea, these requirements can be as difficult as a 100-percent offset requirement. Appendix II provides detailed information on the offset requirements of each newly industrialized country and the terms of the offset agreements we reviewed. 
Countries with less industrialized economies, such as Kuwait, Saudi Arabia, and the United Arab Emirates, generally pursue indirect offsets to help create profitable businesses and build their country’s infrastructure. These countries usually do not pursue direct offsets because they have limited defense and other advanced technology industries and are not interested in attracting work that would require importing foreign labor. The United Arab Emirates’ new offset policy grants credit only for profits generated rather than the value of the investment. We reviewed five offset agreements, with a value of at least $1.6 billion, with countries that have less industrialized economies. The following are highlights of the agreements we reviewed: The agreements with Kuwait required that 30 percent of the sales be offset through investment projects, including infrastructure development. Kuwait’s offset policy grants multipliers up to 3.5 for investments in high priority areas. The agreements with Saudi Arabia were informal and did not require a specified offset percentage. The agreements primarily called for nondefense-related investment projects. The agreements required joint ventures between Saudi Arabian and foreign companies and assigned values to technology transfers at the cost the country would have incurred to develop them. The agreements with the United Arab Emirates required that 60 percent of the sale be offset through nondefense-related investment projects and granted multipliers for various types of investment projects. The following are representative examples of the offset projects that both U.S. and foreign firms have implemented or proposed in these less industrialized economies: Several French firms have established manufacturing facilities or other investments in the United Arab Emirates to satisfy offset obligations. 
For example, Thomson-CSF started a garment manufacturing enterprise in Abu Dhabi in connection with a contract for tactical transceivers and audio systems. Giat Industries created an engineering company specializing in air conditioning as part of its offset commitment for the United Arab Emirates’ purchase of battle tanks. (Countertrade Outlook, Vol. XIII, No. 8, Apr. 24, 1995, pp.3-4.) McDonnell-Douglas Helicopter Company entered into several joint ventures with firms in the United Arab Emirates to satisfy offset commitments for the sale of AH-64 Apache helicopters. Projects included forming a company to manufacture a product that cleans up oil spills and creating another firm that will recycle used photocopier and laser computer printer cartridges. The defense contractor is also paying for a U.S. law firm to draft the country’s environmental laws. (Countertrade Outlook, Vol. XIII, No. 2, Jan. 23, 1995, pp. 2-3.) General Dynamics and McDonnell-Douglas contracted with companies in Saudi Arabia to satisfy offset obligations from several weapons sales. In one case, a Saudi firm will manufacture circuit boards for tanks, while in another instance, a Saudi company will manufacture components for F-15 fighter aircraft. (Countertrade Outlook, Vol. XIII, No. 6, Mar. 27, 1995, p. 5.) The United Arab Emirates is working with Chase Manhattan to establish an off-shore investment fund to provide international contractors doing business in the country the opportunity to satisfy part of their offset obligations. (Countertrade Outlook, Vol. XIII, No. 2, Jan. 23, 1995, p. 1.) Some company officials commented that indirect offsets make more sense for the countries than defense-related offsets. Although U.S. companies generally found meeting offset demands in Kuwait and Saudi Arabia manageable, some companies expressed concern over the limited number of commercially viable investment opportunities in these countries. 
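The two crediting approaches used by these less industrialized countries differ sharply in how quickly a seller can retire its obligation. A rough sketch of the contrast follows; the 3.5 multiplier cap is Kuwait’s stated policy, but all dollar figures, the profit level, and the time horizon are invented for illustration only.

```python
def credit_on_investment(investment, multiplier):
    """Kuwait-style crediting: credit is granted on the value of the
    investment itself, with multipliers (up to 3.5) for projects in
    high-priority areas such as infrastructure development."""
    return investment * multiplier

def credit_on_profits(annual_profit, years):
    """UAE-style crediting: credit accrues only from the profits the
    new venture actually generates, so fulfillment is slower and
    depends on the venture's commercial success."""
    return annual_profit * years

# The same hypothetical $10 million venture under each policy:
# - under investment-based crediting with the maximum 3.5 multiplier,
#   the seller earns $35 million of credit up front;
# - under profit-based crediting, an assumed $1 million annual profit
#   yields only $5 million of credit after five years.
upfront_credit = credit_on_investment(10_000_000, 3.5)
accrued_credit = credit_on_profits(1_000_000, 5)
```

This difference is one reason, as discussed below, that companies viewed the United Arab Emirates’ demands as particularly costly relative to other offset regimes.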
Further, the United Arab Emirates’ offset demands were seen as particularly costly and impractical since offset credits were based on profits actually generated by the newly established enterprise. Appendix III provides detailed information on the offset requirements of each less industrialized country and the terms of the offset agreements we reviewed. Views on the effects of offsets are divided between those who accept offsets as an unavoidable part of doing business overseas and those who believe that offsets negatively affect the defense industrial base and other U.S. interests. It is difficult to accurately measure the impact of offsets on the overall U.S. economy and on specific industry sectors that are critical to defense. Company officials told us that without offsets, most export sales would not be made and the positive effects of these exports on the U.S. economy and defense industrial base would be lost. Offsets help foreign buyers build public support for purchasing U.S. products, especially since weapon procurement often involves the expenditure of large amounts of public monies on imported systems. Other company officials indicated that export sales provide employment for the U.S. defense industry and orders for larger production runs, thus reducing unit costs to the U.S. military. They also noted that many offset deals create new and profitable business opportunities for themselves and other U.S. companies. Critics charge that offsets have effects that limit or negate the economic and defense industrial base benefits claimed to be associated with defense export sales. Mandated offshore production may directly displace U.S. defense firms that previously performed this work, and offsets that transfer technology and provide marketing assistance give foreign defense firms the capabilities to subsequently produce and market their products, often in direct competition with U.S. defense companies. 
According to company officials, indirect offsets involving procurement, technology transfer, marketing assistance, and unrelated commodity purchases may harm nondefense industries by establishing and promoting foreign competitors. Defense exports involving offsets are small relative to the economy as a whole, making it difficult to measure any effects using national aggregated data. Similarly, the impact of offsets on specific sectors of the U.S. economy cannot be accurately measured because reliable data on the number and size of offset agreements and the transactions used to fulfill these offsets are not readily available. In addition, it would be difficult to isolate the effects of offsets from numerous other factors affecting specific industry sectors. According to officials from large defense firms and an association representing U.S. suppliers, reliable information on the impact of offsets is difficult to obtain because company officials are generally not aware that a particular offset arrangement caused them to lose or gain business. Only limited anecdotal information from these companies is available. The lack of reliable information is a long-standing problem. Recognizing the need for more information, Congress required in 1984 that the President annually assess the impact of offsets. The President tasked the Office of Management and Budget (OMB) to coordinate these assessments and submit a report to Congress. However, OMB was not able to accurately measure the impact of offsets on U.S. industry sectors critical to defense with the information it collected. The Defense Production Act Amendments of 1992 directed the Commerce Department to take the lead in assessing the impact of offsets. As part of this effort, the statute requires companies to submit information on their offset agreements that are valued at $5 million or more. Commerce plans to issue its first report in 1996. 
In response to concerns raised about the impact of offsets, the President issued a policy statement in 1990 that reaffirmed DOD’s standing policy of not encouraging or participating directly in offset arrangements. This policy statement also recognized that certain offsets are economically inefficient and directed that an interagency team, led by DOD in coordination with the Department of State, consult with foreign nations on limiting the adverse effects of offsets in defense procurement. In 1992, Congress adopted this policy as part of the Defense Production Act Amendments. According to the Commerce Department, DOD and the State Department have not consulted with foreign nations on the adverse effects of offsets as detailed in the 1990 presidential policy statement or the 1992 law. However, in 1990, as part of the discussions over the NATO Code of Conduct for defense trade, U.S. officials proposed to limit offsets in defense trade, but no action was taken because countries could not agree to the Code. DOD took action to include, as part of memorandums of understanding between the United States and its allies, a provision to consult on the adverse effects of offsets. DOD has discussed offsets on a case-by-case basis with several countries in the context of specific weapon sales. Commerce officials noted that offsets are driven by the demands of foreign governments against private U.S. companies. These demands place second and third tier U.S. suppliers at a disadvantage since their interests are not usually represented in these sales. Commerce officials said that DOD should take action, in accordance with the 1990 presidential policy, to consult with other nations to limit the adverse effects of offsets. One DOD official noted that negotiating the offset issue by itself would not give the United States a strong bargaining position because of U.S. reluctance to change Buy American and small business preferences. 
According to the Commerce Department, industry is not opposed to the initiation of consultations on offsets, but is concerned that the U.S. government might unilaterally limit the use of offsets. Officials from several large defense companies we interviewed also expressed concern about any unilateral action by the U.S. government that would limit offsets. Similarly, several officials expressed doubt that any multilateral agreement limiting offsets would be enforceable, and some noted that any ban would likely force offset activity underground. In addition, some company officials said that unilateral action banning offsets or an unenforceable multilateral agreement would place U.S. exporters at a competitive disadvantage in winning overseas defense contracts. Commerce and DOD officials agreed that unilateral action to limit offsets could harm U.S. defense companies. The Departments of Commerce, Defense, and State were given the opportunity to comment on a draft of this report. The Department of Commerce provided written comments (see app. IV) and the Departments of State and Defense provided oral comments. Commerce said our report provides a balanced view of the subject. State commented that the report accurately describes the growth in offset demands and the requirements countries place on their purchases of foreign military equipment. DOD concurred with our report and commented that it should contribute to a better understanding of the nature of offset demands and the role of offsets in military export sales. We have made minor technical corrections to the report where appropriate based on suggestions provided by Commerce and Defense. To assess how countries’ offset requirements have evolved and how companies were meeting these obligations, we focused our analyses on 10 countries. We selected these countries based on their geographic distribution and their significant purchases of foreign military equipment. We then visited nine major U.S. defense companies. 
These firms were chosen based on their roles as prime contractors and subcontractors that provide a full range of defense goods and services. We interviewed company officials regarding each country in our study and obtained the offset agreements that they entered into with these countries since 1985. For the limited number of agreements that we could not obtain, we relied on summarized data provided by the company. Due to the proprietary nature of the offset agreements, we are limited in our ability to present specific information on a particular contract. However, to illustrate the types of offset projects U.S. and foreign companies undertook in the countries we reviewed, we used examples from various defense journals. We did not corroborate the information reported in these journals. To determine what each country’s offset policy required, we interviewed company officials and reviewed each country’s requirements, as provided by the companies in our study. We then reviewed other government studies that examined offset requirements for these countries. We did not discuss these policies with officials from each country to confirm their accuracy. To examine the implications of offsets on the U.S. economy, we examined studies of defense offsets performed by other U.S. government agencies and other groups. We interviewed DOD, Commerce, and State officials on offset trends and any U.S. actions taken regarding offsets. We also interviewed officials from prime contractors as well as trade associations that represent mostly smaller U.S. companies. The companies in our study were cooperative and provided the information we requested in a timely manner. However, our ability to fully review the actual offset projects was affected by access constraints. This information is considered commercially sensitive by defense companies, and information on projects implementing the offset agreements was selectively provided by the companies. 
The companies reviewed our report to ensure that no sensitive information was disclosed. We conducted our review from May 1995 to February 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Secretaries of Defense, State, and Commerce. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report were Karen Zuckerstein, Davi D’Agostino, David C. Trimble, Tom Hubbs, and John Neumann.

Canada seeks offsets through its Industrial and Regional Benefits policy to develop and maintain the capabilities and competitiveness of Canadian companies. It solicits offsets that will benefit its manufacturing and advanced technological capabilities, including technology transfer, investments in plants or productivity improvement, and coproduction with Canadian suppliers. Offset agreements generally range from 75 percent to 100 percent of the weapon systems contract’s value. Most offsets involve purchasing products from Canadian firms in the defense, aerospace, or other high-technology industries. The official guidelines do not state a threshold for requiring offsets, and offsets have been provided on contracts with values as low as $12 million. Canada is distinctive in its emphasis on distributing offset projects across its various regions, particularly in its lesser-industrialized Western and Atlantic provinces. Most offset agreements require regional distribution, including several that specify which suppliers and regions should receive offset work. In addition, some agreements contain penalty provisions for not achieving a certain percentage of offset in each Canadian region. Many offset agreements also specify that small businesses must receive a portion of the offset projects. 
Several agreements included detailed requirements for determining the amount of offset credit. For example, offset projects will receive credit only if the minimum Canadian content requirement is met, which was 35 percent in several of the agreements. Also, offset credit will only be granted for new business or increases in existing business. Companies are now usually not able to get offset credit for existing business in the country, as they were in the past. Generally, the companies in our study did not have significant difficulty meeting offset requirements in Canada. Several companies found the defense-related offsets easy to implement because Canada has a developed defense industry and the companies have a significant amount of existing business in the country. Table I.1 summarizes Canada’s offset guidelines and agreements.

Table I.1: Canada—Offset Guidelines and Agreements

Goal
  Guidelines: Generate long-term industrial benefits.
  Agreements: Generated long-term industrial benefits with an emphasis on the defense and aerospace industries.
Threshold
  Guidelines: Not stated.
  Agreements: Value of contracts with offsets started at $12 million.
Type of offset
  Guidelines: Both direct and indirect offsets are accepted, with emphasis on high-technology industries.
  Agreements: Many agreements showed preference for offsets related to defense or aerospace industries.
Offset percentage
  Agreements: Recent agreements required offsets ranging from 75 percent to 100 percent of the contract value.
Multipliers
  Agreements: Two agreements provided for 20-percent additional credit for an increase in direct offset amount.
Banking
  Agreements: Banking permitted in several agreements.
Penalties
  Agreements: Varied from 2.5 percent to 12 percent of shortfall; several agreements did not have penalties.
Performance period
  Agreements: Ranged from less than 5 years to over 10 years; several agreements had yearly milestones for completing offset commitments.
Local content
  Agreements: Several agreements required a minimum of 35-percent Canadian content to receive any offset credit.
Regional and small business requirements
  Guidelines: Request offset projects that promote regional and small business development and provide subcontracts to Canadian suppliers.
  Agreements: Most agreements included regional distribution and small business requirements; several recent agreements specified the actual suppliers to be used in carrying out offset agreements.
Additionality
  Agreements: Recent agreements only provided offset credit for new business.
Oversight
  Agreements: Several agreements had high administrative oversight to determine if offsets resulted in new business and met Canadian content and other requirements.

Note: Banking refers to the practice of allowing companies to earn extra offset credit under one offset agreement and save or “bank” those credits to satisfy a later offset obligation.

The Netherlands uses offsets to maintain and promote its technical capabilities in defense and other industries. The country has a well-established defense industry and requires offsets that are related to defense or high-technology civilian industries. The defense-related offsets typically involve coproduction of components, parts and assemblies, and technical services rendered by Dutch firms. Nondefense-related offsets include a wide range of activities designed to contribute to the Netherlands industrial base, including purchasing products from Dutch firms in the aircraft, automotive, electronics, optical, or shipbuilding industries. The Netherlands’ guidelines require offsets on all weapons contracts valued at more than $3 million. The standard offset demand is 100 percent, and the majority of agreements over the last 10 years reflect this requirement. Many of the agreements require that 70 percent to 85 percent of any product purchased be produced in the Netherlands in order to receive full credit toward the offset obligation. In addition, several recent agreements state that credit will only be granted for new business created or an increase in existing business. Company representatives told us that implementing defense-related offsets in the Netherlands is not a problem, given the country’s sophisticated and highly developed industrial base. 
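The banking of offset credits described for several of these agreements reduces to simple arithmetic: credits earned beyond the current obligation are saved and drawn down against a later one. The following is an illustrative sketch only; the `settle_obligation` helper and all dollar figures (in millions) are hypothetical, not drawn from any agreement in this report.

```python
# Illustrative sketch (not from the report) of "banking" offset credits:
# surplus credits earned under one agreement are saved and applied to a
# later obligation. All figures (in millions of dollars) are hypothetical.

def settle_obligation(obligation, credits_earned, banked=0.0):
    """Apply earned plus previously banked credits to an obligation.

    Returns (remaining_obligation, new_banked_balance).
    """
    available = credits_earned + banked
    remaining = max(0.0, obligation - available)
    surplus = max(0.0, available - obligation)  # excess goes into the bank
    return remaining, surplus

# A company owing $100M of offsets earns $120M in credits, banking $20M:
remaining, banked = settle_obligation(100.0, 120.0)
# On a later $30M obligation, $15M of new credits plus the banked $20M
# satisfy the obligation in full, leaving $5M still banked:
remaining2, banked2 = settle_obligation(30.0, 15.0, banked)
```

Agreements that are silent on banking would simply discard the surplus rather than carry it forward.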
Several companies identified offsets as a critical factor in winning a contract in the Netherlands and believe the country would choose a less-desired weapon system to get a better offset package. Table I.2 summarizes the Netherlands’ offset guidelines and agreements.

Table I.2: The Netherlands—Offset Guidelines and Agreements

Goal
  Guidelines: Maintain and increase the industrial capacity of the defense industry.
  Agreements: Most agreements included defense-related offsets.
Threshold
  Guidelines: All defense contracts valued at more than $3 million require offsets.
  Agreements: All agreements exceeded the official offset threshold.
Type of offset
  Guidelines: Both direct and indirect offsets are accepted, with emphasis on dual-use (military and civilian) technology.
  Agreements: Agreements showed preference for direct offsets or indirect offsets in the defense or other technologically equivalent industry.
Offset percentage
  Guidelines: Government seeks 100-percent offset.
  Agreements: Most agreements over the last 10 years required 100-percent offset.
Multipliers
  Agreements: Multipliers are rarely included. However, according to company officials, the amount of credit granted for an offset project can be negotiated, achieving the same results as a multiplier.
Penalties
  Agreements: Not stated. However, according to a May 1995 press report, the Netherlands legislature requested that penalties be included in one offset agreement.
Banking
  Agreements: Banking permitted in several agreements.
Performance period
  Agreements: Ranged from 4 years to 15 years; milestones are generally not included in the agreements.
Local content
  Agreements: Most agreements required a minimum of 70-percent local content to receive 100-percent offset credit.
Regional and supplier requirements
  Agreements: Some agreements specified the actual suppliers to be used in carrying out the offset agreement or required that a portion of the offset activities be fulfilled by collaboration with small- and medium-sized businesses.
Additionality
  Guidelines: Require indirect offsets to include new business or a significant increase in existing orders.
  Agreements: Several agreements specified that offset credit would be granted only for new business or an increase in business.

Spain uses offsets on defense orders to support and develop its defense industry. 
Although Spain does not have written offset guidelines, it does have a policy of demanding offsets, including coproduction by designated Spanish firms, technology transfer, and export of Spanish defense products. Spain’s standard offset requirement is 100 percent; however, the agreements over the last 10 years have ranged from 30 percent to 100 percent of the value of the weapon system. Spain does not have a stated threshold amount for requiring offsets, but all of the offset agreements over the last 10 years were for weapons sales over $7 million. In some agreements, Spain has included provisions to only credit offset projects that create new business or represent an increase in existing business, and not grant credit for companies’ current business in the country. In addition, Spain has sometimes included a local content requirement for offset projects, providing credit only for the portion of the projects that are produced in Spain. Companies report that to get approval for offset projects, the work usually has to be spread across various Spanish regions, even though the agreements do not explicitly contain this requirement. In addition, Spain has targeted specific Spanish companies that it wants to get offset work. One U.S. company said offsets were relatively easy to implement in Spain because Spain’s participation has consisted of producing less sophisticated components. Another company observed that offsets are more difficult to implement in Spain than in other European countries because of Spain’s less diverse industrial base. Table I.3 summarizes Spain’s offset guidelines and agreements.

Table I.3: Spain—Offset Guidelines and Agreements

Status of guidelines
  Guidelines: Has official offset policy, but not written guidelines.
Goal
  Guidelines: Provide support for Spain’s defense industry.
  Agreements: Some agreements reflected the goal of providing opportunities for the defense industry.
Threshold
  Agreements: Agreements were for contracts valued at over $7 million.
Type of offset
  Guidelines: Emphasis on defense-related offsets.
  Agreements: Agreements reflected preference for offsets in the defense industry, including coproduction, technology transfer, and export of Spanish defense products.
Offset percentage
  Guidelines: 100 percent is the standard offset demand.
  Agreements: Agreements required from 30-percent to 100-percent offset.
Multipliers
  Agreements: Some agreements included multipliers for technology and production licenses and joint development programs.
Banking
  Agreements: Banking excess credits common.
Penalties
  Guidelines: Generally requires penalties.
  Agreements: Some agreements included penalties ranging from 3 percent to 5 percent of the offset commitment shortfall.
Performance period
  Agreements: Ranged from 5 years to 8 years, with grace periods sometimes included; only one agreement had milestones.
Local content
  Guidelines: Sometimes grants credit only for value of local content.
  Agreements: Included in some agreements.
Regional and supplier requirements
  Guidelines: Sometimes specifies regional or supplier requirements.
  Agreements: Some agreements specified the actual supplier to be used in carrying out the offset agreement. In addition, companies are encouraged to spread offset projects out over Spanish regions.
Oversight
  Agreements: Some agreements required regular reporting of offset implementation status.

The United Kingdom uses offsets to channel work to its defense companies. The country has a well-established defense industry and requests offsets that are related to defense, including production, technology transfer, capital investment, and joint ventures. Offset agreements focus on procurement of defense-related products and services from British firms. According to the country’s guidelines, offsets are not mandatory, but are used as an assessment factor in contract evaluations. Offsets are commonly sought from North American companies and on a case-by-case basis from European companies. Offsets are encouraged for weapon sales worth more than $16 million. A majority of the agreements required 100 percent of the sale to be offset. 
Some companies stated that implementing defense-related offsets in the United Kingdom is not a problem, given the country’s sophisticated and diverse industries and the significant amount of existing business these companies have in the country. However, several recent agreements specify that offset credit will be given only for new business or a verifiable increase in existing business, based on a prior 3-year average. A company’s existing business in the country is not eligible for offset credit. Furthermore, recent agreements specify that any purchase orders or subcontracts for offset credit must be placed with one of the companies on the country’s registry of recognized defense companies. However, this is not a problem for U.S. companies partly because many British firms are on the registry. Table I.4 summarizes the United Kingdom’s offset guidelines and agreements.

Table I.4: United Kingdom—Offset Guidelines and Agreements

Goal
  Guidelines: Compensate for loss of work to the United Kingdom’s defense industrial sector.
  Agreements: Agreements reflected guidelines’ goal to provide work to the defense industrial sector.
Threshold
  Guidelines: All defense contracts valued at more than $16 million require offsets.
  Agreements: Most were for contracts valued above the threshold amount.
Type of offset
  Guidelines: All offsets must be defense-related.
  Agreements: Agreements reflected requirement for defense-related offsets.
Offset percentage
  Guidelines: Government seeks 100-percent offset.
  Agreements: Offset percentage ranged from 50 percent to 130 percent; most agreements required at least 100-percent offset.
Multipliers
  Guidelines: Offset credit can be negotiated.
  Agreements: Offset credit can be negotiated. For example, one agreement provided for “extra credit” if a specific offset project was undertaken.
Banking
  Guidelines: Permitted in certain circumstances.
  Agreements: Banking permitted in most agreements.
Penalties
  Guidelines: No penalties; agreements call for “best efforts” to fulfill.
Performance period
  Guidelines: Not to exceed the delivery period of the contract.
  Agreements: Ranged from 3 years to 13 years.
Local content
  Guidelines: Not stated.
Regional and supplier requirements
  Guidelines: Offset activities must be placed with a qualified United Kingdom defense manufacturer. Such companies are listed in a central registry and are from various regions of the country.
  Agreements: Most agreements specified that offset credit would only be granted for work with recognized United Kingdom defense contractors.
Additionality
  Guidelines: Offset activities must be new and consist of products not previously purchased, products purchased from new suppliers, or new contracts for existing business valued at over $50,000.
  Agreements: Several recent agreements specified that offset credit would be granted only for new business or an increase in business.
Oversight
  Guidelines: Offset proposals commonly submitted at time of contract tender for approval; no other mention of oversight.
  Agreements: Several agreements required regular reporting of offset activity progress; staff to review offset credit.

Singapore uses offsets to build its capability to produce, maintain, and upgrade its defense systems. It has required offsets on an ad hoc basis since the mid-1980s, but has recently begun to consistently demand offsets. Singapore’s official policy requires all major purchases to be offset with a 30-percent offset performance goal. All the offset arrangements we reviewed emphasized defense-related projects. These arrangements required producing components for the weapon system being purchased or establishing a Singaporean firm as a service center for a weapon system. Singapore seeks technology transfer and training, and most offset agreements include multipliers or provide credits in excess of contractor costs for highly desired projects. For example, manufacturing technology transferred for one weapon system was valued at several times the cost to the company to provide it. Generally, companies that had offset agreements with Singapore considered the requirements manageable. Table II.1 summarizes Singapore’s offset guidelines and agreements.

Table II.1: Singapore—Offset Guidelines and Agreements

Goal
  Guidelines: Assist the Ministry of Defense in building up Singapore’s capabilities to provide necessary maintenance, production, and upgrade capability to support equipment and systems the Ministry has procured. To be accomplished through technology transfer, technical assistance, participation in research and development, and marketing assistance.
  Agreements: Consistent with the guidelines.
Threshold
  Guidelines: All “major” purchases of equipment, material, and services; however, the guidelines do not provide a specific threshold.
  Agreements: All the agreements we reviewed were for sales valued at over $5 million.
Type of offset
  Guidelines: Direct offset is preferred but indirect offset is acceptable.
  Agreements: Most included a mix of direct and indirect offset transactions.
Offset percentage
  Guidelines: At least 30 percent of main contract value, expressed as a goal.
  Agreements: Ranged from 25 percent to 30 percent.
Multipliers
  Agreements: Some agreements provided multipliers for activities such as technology transfer (valued at up to 10 times the cost), training, or technical assistance.
Banking
  Agreements: Permitted banking in most agreements.
Penalties
  Guidelines: 10 percent of unfulfilled obligation.
  Agreements: 3 to 5 percent of unfulfilled obligation.
Performance period
  Guidelines: Concurrent with the duration of the main contract up to a maximum of 10 years, plus a 1-year grace period.
  Agreements: Agreements are generally consistent with the guidelines.
Local content
  Guidelines: Generally not stated.
Regional and supplier requirements
  Guidelines: Firms owned by the Ministry of Defense are given first preference on bidding for work with U.S. contractors.
  Agreements: Agreements are generally consistent with the guidelines. The Ministry of Defense is very involved in selecting Singaporean firms that U.S. defense contractors must work with.

South Korea uses offsets to acquire advanced technologies for its defense and commercial industry. Technology transfer and related training have consistently been a high priority for South Korea, and they have received increased emphasis in recent years as South Korea has developed its aerospace industry. To obtain technology transfer and training, South Korea grants multipliers and awards offset credit that exceeds the actual cost to the company of providing these items. As a result of U.S. government pressure to reduce offset demands in the late 1980s, South Korea’s policy calls for a 30-percent offset on defense purchases exceeding $5 million. 
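The multiplier and percentage mechanics that recur in these agreements reduce to simple arithmetic: an obligation is a percentage of the contract value, and each offset activity earns credit equal to its cost times a negotiated multiplier. The sketch below is illustrative only; the dollar figures, the specific multiplier values, and the `offset_credit` helper are hypothetical, not drawn from any agreement described in this report.

```python
# Hypothetical sketch of offset multiplier arithmetic: the credit a buyer
# grants equals the contractor's actual cost of an activity times a
# negotiated multiplier. All figures (in millions) are illustrative.

def offset_credit(cost, multiplier=1):
    """Credit granted toward an offset obligation for one activity."""
    return cost * multiplier

contract_value = 200                    # hypothetical $200M weapon sale
obligation = contract_value * 30 / 100  # 30-percent offset requirement

# $5M of technology transfer at a 10x multiplier earns $50M of credit;
# $10M of ordinary local purchases (multiplier of 1) covers the rest.
credits = offset_credit(5, 10) + offset_credit(10)
remaining = max(0.0, obligation - credits)  # 0.0: obligation satisfied
```

The arithmetic shows why high multipliers matter to contractors: a small out-of-pocket cost for a favored activity can retire most of a large obligation.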
Although some agreements required a 30-percent offset, others required an offset of 40 percent or higher. South Korea has a preference for defense-related offsets, but is also willing to accept a wide variety of indirect offsets to help develop its industry, especially its aerospace industry. In addition, South Korea frequently has required U.S. contractors to buy products, such as forklifts and printing press parts, for export resale that were unrelated to the weapon system being purchased. Several U.S. companies indicated that it can be difficult to work with South Korea. They noted that the 30-percent offset requirement is tougher to satisfy than the old 50-percent requirement and can be as tough as a 100-percent requirement. Several company officials also noted that they have had difficulty in not being allowed to use banked credits. However, some contractors commented that South Korea was consistent in its requirements and would negotiate if the U.S. company was trying to meet its offset obligation. Table II.2 summarizes South Korea’s offset guidelines and agreements.

Table II.2: South Korea—Offset Guidelines and Agreements

Date of guidelines
  Guidelines: Offset requirements first begun before 1985; latest version published in January 1992.
Goal
  Guidelines: Acquire key advanced technologies required for defense and commercial industry research and development and production; enhance depot maintenance capability; enhance opportunities for manufacturing equipment and its components; and provide opportunities to repair and overhaul foreign military equipment and to export defense-related products.
  Agreements: Agreements were generally consistent with the guidelines. However, certain offset projects had no relationship to the weapon systems being purchased.
Threshold
  Guidelines: Military procurements exceeding $5 million are subject to offset.
  Agreements: Several offset agreements prior to 1992 involved contracts that were below the current $5-million threshold. In addition, according to one contractor, South Korea combined two separate purchases into one contract to reach the offset threshold.
Type of offset
  Guidelines: Direct offset is preferred, but indirect offset is acceptable.
  Agreements: Agreements were generally consistent with the guidelines and reflected a willingness to accept indirect offset, especially involving technology transfer and training, that will contribute to economic development.
Offset percentage
  Guidelines: At least 30 percent of contract value.
  Agreements: Since 1985, agreements have generally required at least a 30-percent offset, and frequently more.
Multipliers
  Guidelines: Limited use of multipliers. Facilities, equipment, and tooling provided by the contractor free of charge are given a multiplier of two times their actual cost.
  Agreements: Several offset agreements provided multipliers that were larger than the published guidelines, especially for technology transfer and training. For example, providing on-the-job training for South Korean engineers at a U.S. contractor’s plant was valued at 10 times the cost of providing the training.
Banking
  Agreements: Banking excess credits allowed in several individual agreements, but most were silent on banking.
Penalties
  Guidelines: 10 percent of unfulfilled obligation.
  Agreements: Agreements were consistent with the guidelines.
Performance period
  Guidelines: Generally corresponds to the performance period for the main contract.
  Agreements: Agreements were generally consistent with the guidelines and occasionally required and paralleled overall contract performance periods.
Regional and supplier requirements
  Agreements: Many agreements were prescriptive and specified the South Korean partners to be used by U.S. contractors or the exact training to be provided by the U.S. contractor to South Korean workers.
Other requirements
  Agreements: Agreements frequently required U.S. contractors to buy South Korean products for export resale that had no relationship to the contract.

Taiwan instituted its offset policy about 1993. Taiwan uses offsets to encourage private investment, upgrade its industries, and enhance international competitiveness. Taiwan’s goal is to form long-term supplier relationships with foreign companies, using training and technology transfer to gain expertise. Taiwan emphasizes these areas by offering large multipliers for such projects. 
For example, the agreements included multipliers as high as 25 for technology transfer, while other activities such as purchases from local firms received no or very low multipliers. Company officials noted that Taiwan recently passed a requirement calling for 30-percent offsets. Taiwan’s offset guidelines are broad, laying out several categories of industrial cooperation and methods to achieve it, from production of weapon system components to local investment. Offset agreements appear flexible, with projects targeted to areas considered strategic for economic development. In contrast to South Korea and Singapore, Taiwan generally prefers commercial offset projects rather than defense-related projects. Although some agreements include defense-related offset projects, such as coproduction of weapons components, the agreements more commonly involve commercial projects, such as marketing assistance. Generally, the companies we visited believe that Taiwan’s offset requirements have been easily managed. Table II.3 summarizes Taiwan’s offset guidelines and agreements.

Table II.3: Taiwan—Offset Guidelines and Agreements

Dates of agreements
  Agreements: All are after the date of the guidelines.
Goal
  Guidelines: To achieve the timely introduction of key technologies and high-tech industries to Taiwan. Targeted industries include aerospace, semiconductors, advanced materials, information products, precision machinery and automation, and advanced sensors.
  Agreements: Agreements are consistent with the guidelines.
Threshold
  Guidelines: To be determined on a case-by-case basis; both civilian and military government procurements are subject to offset.
  Agreements: The smallest contract we reviewed with an offset requirement was for about $60 million.
Type of offset
  Guidelines: Both direct and indirect offsets are acceptable.
  Agreements: Agreements reflected preference for indirect offset; they either required indirect offset only or were heavily weighted toward indirect.
Offset percentage
  Guidelines: To be determined on a case-by-case basis. However, company officials noted that Taiwan’s legislature passed a law in 1994 requiring 30-percent offsets.
  Agreements: Most of the agreements we reviewed required 10-percent offset with an additional 20 percent expressed as a goal; however, the most recent agreement required 30-percent offset.
Multipliers
  Guidelines: Range from 2 for local purchases to 10 for technology transfer.
  Agreements: Multipliers provided for a broad range of transactions (technology transfer, training, technical assistance, marketing assistance, investments, and joint ventures) valued at between 2 and 25 times the cost of the service provided.
Banking
  Agreements: Most agreements do not specifically discuss banking excess credits.
Penalties
  Guidelines: None; guidelines based on good faith. However, the policy notes that a contractor’s track record in fulfilling an offset obligation is considered when awarding future contracts.
  Agreements: Agreements did not include penalties.
Performance period
  Guidelines: Concurrent with master contract.
  Agreements: All agreements had a 10-year performance period.
Local content
  Guidelines: Not stated.
Other requirements
  Guidelines: Goal is to participate in long-term supplier relationships, using training and technology transfer to gain expertise. Guidelines are broad, laying out several categories of industrial cooperation and methods to achieve it, from production of weapon system components to local investment.
  Agreements: Consistent with guidelines, the offset projects were targeted to areas considered “strategic” to economic development.

In 1992, Kuwait began requiring offsets for all defense purchases over $3 million. Kuwait pursues offsets that will generate wealth and stimulate the local economy through joint ventures and other investments in the country’s infrastructure. The limited number of agreements we reviewed called for U.S. contractors to propose investment projects and then manage and design the projects selected by the Kuwaiti government. The agreements required offsets equal to 30 percent of the contract values, as stated in Kuwait’s offset policy. U.S. companies have had limited experience with Kuwait’s offset program to date, but generally consider it manageable. Table III.1 summarizes Kuwait’s offset guidelines and agreements. 
Table III.1: Kuwait—Offset Guidelines and Agreements

Date of guidelines
  Guidelines: Offset policy instituted in July 1992; revised guidelines issued in March 1995.
Dates of agreements
  Agreements: All are after the institution of the 1992 guidelines.
Goal
  Guidelines: Promote and stimulate the local economy.
  Agreements: Agreements are consistent with program goals.
Threshold
  Guidelines: Offset threshold is about $3 million.
  Agreements: Exceed threshold.
Type of offset
  Guidelines: Indirect offsets.
  Agreements: Agreements involved indirect offsets.
Offset percentage
  Guidelines: 30 percent of the value of the contract.
  Agreements: Agreements required 30-percent offset.
Multipliers
  Guidelines: The relative values of multipliers reflect Kuwait’s preference for capital expenditures, research and development, training, and increased export sales of locally produced goods and services (multipliers of 3.5). Other activities are given smaller multipliers.
  Agreements: Not stated.
Banking
  Guidelines: Allowed up to 100 percent of offset obligation.
  Agreements: Banking permitted.
Penalties
  Guidelines: 6 percent of unfulfilled obligation.
  Agreements: Not stated.
Performance period
  Guidelines: Not stated, although 50 percent of the offset should be completed within 4 years.
  Agreements: Not stated.
Other requirements
  Guidelines: Long-term investment through joint ventures is encouraged.
  Agreements: Agreements reflected interest in developing viable businesses.

Saudi Arabia has intermittently required offsets since the mid-1980s. Officials at one company observed that Saudi Arabia has recently pursued “best effort” agreements with U.S. defense contractors, rather than formal offset agreements. Saudi Arabia uses its offset policy to broaden its economic base and provide employment and investment opportunities for its citizens. The offset agreements are informal with no set offset percentage, although officials at one company estimated their arrangement was equivalent to a 35-percent offset agreement. The agreements include a requirement that companies enter into joint ventures with local companies to implement offset activities. The offset activities consist of defense- and nondefense-related projects. In some instances, the offset projects include local production of parts or components for the weapon system being purchased. 
However, these represent small portions of the overall offset projects, and the Saudi government agreed to pay price differentials to make Saudi manufacturers price competitive. The agreements do not include explicit multipliers, but some agreements grant credits for technology transfers at the cost Saudi Arabia would have incurred to develop the technology. Companies commented that Saudi Arabia wants to establish strategic partnerships and long-term relationships with its suppliers and that the Saudi government has been fairly flexible in negotiating offset agreements. Table III.2 summarizes Saudi Arabia’s offset guidelines and agreements. Table III.2: Saudi Arabia—Offset Guidelines and Agreements 1990-93 (One prior agreement in 1988.) Broaden the economic base, increase exports, diversify the economy, transfer state-of-the-art technology, and provide investment opportunities for Saudi Arabian investors. Agreements were consistent with program goals. Not stated. Offset applies to both military and civil federal procurement. Agreements were associated with high-dollar value contracts. Indirect offsets are preferred. Mostly indirect offsets that were unrelated to defense. 35 percent of contract value. Agreements were consistent with the requirement or called for “best efforts” commitment. Offset credit for training Saudi Arabian nationals will be given at two times the contractors’ cost (i.e., a multiplier of two). No other multipliers cited. Not stated. However, technology transfers were valued at the cost Saudi Arabia would have incurred to develop the technology, plus the value of future benefits. Not stated. Not stated. Agreements generally called for “best efforts” as part of Saudi Arabia’s desire to establish long-term relationships. 10 years. Not stated. Oil- and gas-related projects are not eligible for credit. Offset activity involved mostly nondefense-related projects unrelated to the oil and gas industry. Should be 50 percent of total offset obligation. 
- Joint ventures: Sought between foreign and Saudi firms; foreign firm's ownership share may decrease to 20 percent by end of 10 years. Agreements: required joint ventures, but appeared to be less formal than published guidelines; agreements cited specific Saudi Arabian firms for joint venture partners.

The United Arab Emirates first instituted its offset policy in 1990. In 1993, it issued new requirements granting offset credit only for the profits generated by offset projects. The policy requires a 60-percent offset on all contracts valued at $10 million or more. The United Arab Emirates uses offsets to generate wealth and diversify its economy by establishing profitable business ventures between foreign contractors and local entrepreneurs. The United Arab Emirates is interested in a wide range of nondefense-related offset projects. Company officials generally questioned the feasibility of the United Arab Emirates' current offset requirements. They said only a small number of viable investment opportunities exist and such projects take several years to generate profits. Table III.3 summarizes the United Arab Emirates' offset guidelines and agreements.

Table III.3: United Arab Emirates—Offset Guidelines and Agreements
- Policy dates: New guidelines issued about 1993; prior guidelines dated 1990. Agreements: all postdate the institution of the 1990 requirements.
- Goals: Generate wealth by creating commercially viable businesses through partnerships with local entrepreneurs. Agreements: consistent with guidelines in effect at the time.
- Threshold: For all "substantial" defense procurement; requirements specifically cite a $10-million threshold for any government procurement. Agreements: all exceeded the threshold.
- Type of offsets: Policy implies nondefense, wealth-generating investments are preferred; the policy explicitly discourages labor-intensive projects. Agreements: involved indirect offsets unrelated to defense.
- Offset percentage: At least 60 percent of the value of the imported content. Agreements: all required a 60-percent offset.
- Multipliers: Not mentioned under current policy; credit is based on profit generated rather than a valuation (using multipliers) of the investment in the project. The 1990 policy permitted multipliers. Agreements: some agreements that predate the new offset policy included multipliers that reflected the United Arab Emirates' preference for investment.
- Banking: Banking of offset credits is permitted. Agreements: permitted banking of offset credits and buying of excess credits from other companies.
- Penalty: 8.5 percent of the unfulfilled obligation. Agreements: consistent with guidelines.
- Performance period: Some agreements exceeded the 7-year performance-period requirement.
- Milestones: To be negotiated for each offset proposal. Agreements: included milestones throughout the obligation.
- Eligibility: Companies must demonstrate that offset ventures are new work or extensions of existing activities. Agreements: required projects to be preapproved for eligibility and offset credit.
- Offset development fund: May require financial investment in an offset development fund in lieu of conventional offsets. Chase Manhattan is working to set up a United Arab Emirates investment fund. According to company officials, the fund will require a minimum $5-million investment for at least 10 years, with a guarantee of at least a 2.5-percent return, and the country will provide 20-percent offset credit against investments in the fund.
- Credit valuation: Offset credit for technology transfer, training, parts production, and all offset projects is granted based on the profits generated by these activities rather than the contractor's implementation cost. Company officials noted that this requirement was impractical.

Trade Offsets in Foreign Military Sales (GAO/NSIAD-84-102, Apr. 13, 1984).
Foreign Military Sales and Offsets (Testimony, Oct. 10, 1985).
Military Exports: Analysis of an Interagency Study on Trade Offsets (GAO/NSIAD-86-99BR, Apr. 4, 1986).
Security Assistance: Update of Programs and Related Activities (GAO/NSIAD-89-78FS, Dec. 28, 1988).
Defense Production Act: Offsets in Military Exports and Proposed Amendments to the Act (GAO/NSIAD-90-164, Apr. 19, 1990).
Military Exports: Implementation of Recent Offset Legislation (GAO/NSIAD-91-13, Dec. 17, 1990).
U.S.-Korea Fighter Coproduction Program—The F-16 Version (GAO/NSIAD-91-53, Aug. 1, 1991).
Military Sales to Israel and Egypt: DOD Needs Stronger Controls Over U.S.-Financed Procurements (GAO/NSIAD-93-184, July 7, 1993).
Military Aid to Egypt: Tank Coproduction Raised Costs and May Not Meet Many Program Goals (GAO/NSIAD-93-203, July 27, 1993).
Military Exports: Concerns Over Offsets Generated With U.S. Foreign Military Financing Program Funds (GAO/NSIAD-94-127, June 22, 1994).
Pursuant to a congressional request, GAO reviewed offset requirements associated with military exports, focusing on: (1) how the offset goals and strategies of major buying countries have changed; (2) the offset requirements of these countries and how they are being satisfied; and (3) the impact of offsets and any action taken by the U.S. government. GAO found that: (1) demands for offsets in foreign military procurement have increased in selected countries; (2) countries that previously pursued offsets are now demanding more; (3) countries are requiring more technology transfer, higher offset percentages, and higher local content requirements to offset their foreign military purchases; (4) further, countries that previously did not require offsets now require them as a matter of policy; (5) the offset strategies of many countries in GAO's study now focus on longer term offset deals and commitments; (6) this shift highlights these countries' use of offsets as a tool in pursuing their industrial policy goals; (7) the types of offset projects sought or required by buyer countries in GAO's review depend on their offset program goals, which in turn are driven by their industrial and economic development needs; (8) companies are undertaking a broad array of activities to satisfy offset requirements; (9) countries with established defense industries are using offsets to help channel work to their defense companies; (10) countries with developing defense and commercial industries pursue both defense- and nondefense-related offsets that emphasize the transfer of high technology; (11) countries with less industrialized economies often pursue indirect offsets as a way to encourage investment and create viable commercial businesses; (12) views on the impact of offsets on the U.S. 
economy and specific industries are divided; (13) measuring the impact of offsets on the economy as well as specific defense industries is difficult without reliable data; (14) the Department of Commerce is currently gathering additional information on the impact of offsets and is expected to issue a report in 1996; (15) to date, the executive branch agencies have consulted with other countries about certain offsets associated with individual defense procurements, but have not had an interagency team hold the broad-ranging discussions on the ways to limit the adverse impacts of offsets as called for in a 1990 presidential policy statement; (16) according to the Commerce Department, industry is not opposed to the initiation of consultations, but is concerned about unilateral U.S. government actions to limit the use of offsets; and (17) moreover, representatives from several defense companies expressed doubt about the government being able to enforce restrictions on offsets.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) created the Recovery Board, composed of Inspectors General, to promote accountability by overseeing recovery-related funds. The board was to do so, in part, by providing the public with easily accessible information. The Recovery Act appropriated $84 million for the Recovery Board to carry out its duties and set a termination date of September 30, 2013, for its oversight activities. The act provided the Recovery Board with the following specific powers and functions:
- audit and review spending on its own or in collaboration with federal OIGs;
- issue subpoenas to carry out audit and review responsibilities;
- refer instances of fraud, waste, and mismanagement to federal OIGs;
- hold public hearings and compel testimony through subpoenas;
- enter into contracts with public agencies and private entities;
- review whether there are sufficient and qualified personnel overseeing Recovery Act funds; and
- make recommendations to federal agencies on measures to prevent fraud, waste, and mismanagement of Recovery Act funds.
To fulfill its mandate under the Recovery Act, the Recovery Board utilized data analytics to carry out its oversight responsibilities and increase accountability. Data analytics is a term typically used to describe a variety of techniques that can be used to analyze and interpret data to, among other things, help identify and reduce fraud, waste, and abuse. Specifically, predictive analytic technologies can be used to identify potential fraud and errors before payments are made, while other techniques, such as data mining and data matching of multiple databases, can identify fraud or improper payments that have already been made, thus assisting agencies in recovering these dollars.
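The data-matching technique described above can be sketched in a few lines. This is a minimal illustration only; the records, field names, and function are hypothetical and do not reflect the Recovery Board's actual systems or data:

```python
def flag_improper_payments(payments, excluded_parties):
    """Match a payment file against an excluded-parties list -- a simple
    form of cross-database data matching used to spot improper payments."""
    excluded = {name.lower() for name in excluded_parties}
    return [p for p in payments if p["recipient"].lower() in excluded]

# Hypothetical records for illustration only.
payments = [
    {"recipient": "Acme Paving LLC", "amount": 250_000},
    {"recipient": "Northside Builders", "amount": 80_000},
]
excluded_parties = ["ACME PAVING LLC"]

# One payment matches an excluded party and is flagged for review.
flagged = flag_improper_payments(payments, excluded_parties)
```

In practice such matching runs against large authoritative data sets (for example, debarment and exclusion lists) rather than in-memory lists, but the core join-and-flag logic is the same.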
In October 2009, the Recovery Board established the ROC to analyze the use of Recovery Act funds by employing data analytics, specifically:
- predictive analysis capability to help oversight entities focus limited government oversight resources, based on risk indicators such as programs previously identified as high-risk, high-dollar-value projects, past criminal history of key parties involved in a project, and tips from citizens; and
- in-depth fraud analysis capability using public information to identify relationships between individuals and legal entities.
The ROC served as a centralized, independent repository of tools, methods, and expertise for identifying and mitigating fraud, waste, and mismanagement of Recovery Act funds and the associated parties through the use of such predictive and other analytic technologies. The Recovery Board's assets supporting the ROC include human capital, hardware, data sets, and software. (See fig. 1 for a description of the ROC's assets.) Subsequent legislation expanded the Recovery Board's mandate to include oversight of other federal spending, including funds appropriated for purposes related to the effects of Hurricane Sandy. In addition to expanding its authority, the legislation extended the termination date of the Recovery Board from September 30, 2013, to September 30, 2015. Figure 2 illustrates the timeline of legislation authorizing the Recovery Board and any corresponding appropriations. As we reported in our July 2015 testimony describing the progress made in the initial implementation of the DATA Act, the ROC has provided significant analytical services to its clients, including many OIGs, in support of their antifraud and other activities.
Specifically, on the basis of the ROC’s client-service performance data that we reviewed, as part of the ROC’s analysis supporting investigations and audits, the ROC researched roughly 1.7 million entities associated with $36.4 billion in federal funds during fiscal years 2013 and 2014 at the request of various OIGs and other entities. As described below, examples of such research include Appalachian Regional Commission OIG audits of high- risk grantees and Department of Homeland Security OIG oversight of debris-removal contracts following Hurricane Sandy. The largest single user of ROC assistance over this time was the Appalachian Regional Commission OIG in fiscal year 2012 and the Department of Homeland Security OIG in fiscal years 2013 and 2014 (see fig. 3). The ROC developed specialized data-analytic capabilities to better ensure federal spending accountability. Since January 2012—after the Recovery Board’s mandate was expanded to address federal funds beyond those authorized by the Recovery Act—over 50 federal OIGs and agencies have asked for assistance from the center. Two major tools the ROC used on behalf of the OIGs included (1) link analysis and (2) unstructured text mining: Link analysis assists analysts in making connections by visually representing investigative findings. Link-analysis charts visually depict how individuals and companies are connected, what awards an entity has received, and how these actors may be linked to any derogatory information obtained from multiple data sets. Such tools, when combined with enhanced Geographic Information System capabilities, enable ROC analysts to conduct geospatial analysis by displaying data from multiple data sets on maps to help them make linkages and discover potential problems. (See figs. 4 and 5 for an example of link analysis and two visualizations of the data.) 
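The core of such a link analysis can be sketched as a small graph computation. The entities, edges, and helper functions below are hypothetical illustrations; the ROC's actual analyses combined many data sets with specialized visualization and geospatial tools:

```python
from collections import defaultdict

def build_graph(edges):
    """Undirected adjacency list linking entities such as people,
    companies, awards, and registered addresses."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def linked_entities(graph, node):
    """All entities directly connected to a node -- e.g., every company
    registered at a single address, a common red flag."""
    return sorted(graph[node])

# Hypothetical relationship edges for illustration only.
edges = [
    ("Subject 1", "Subject Company 1"),
    ("Subject Company 1", "123 Industrial Ave"),
    ("Shell Co A", "123 Industrial Ave"),
    ("Shell Co B", "123 Industrial Ave"),
]
graph = build_graph(edges)
colocated = linked_entities(graph, "123 Industrial Ave")
```

Traversing the same graph outward from a flagged entity (for example, with a breadth-first search) is how an analyst surfaces indirect connections, such as an award recipient linked through a joint-venture partner to previously debarred individuals.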
Although link analysis can be applied to a wide range of subjects, the ROC often applied this tool to issues that involved law-enforcement-sensitive data that the Recovery Board had authority to handle. The figure below shows an example of a request made by a federal agency to investigate Subject Company 1 as a delinquent federal debtor. Analysis by the ROC included checking entities identified against relevant events and associations, which include debarments, criminal history, and other factors. The initial review of Subject Company 1 determined that it was the recipient of an award under the Recovery Act totaling $6.4 million from a federal government agency. Further review of this company determined that it was not registered in the System for Award Management and had not previously received any federal awards. ROC analysts identified a news article that explained that this company had been created as a joint venture between Subject Company 2 and an individual, Subject 1. Analysis of Subject Company 2 revealed that four of its employees were previously indicted for fraud in 2006 and three of them were placed on the Excluded Parties List System, debarring them from receiving federal contracts. As part of the same analysis as in figure 4, the ROC determined Subject 1 was listed as the Registered Agent of 42 companies with vague names in a variety of industries. Geospatial analysis, represented in figure 5 below, determined that 15 of the companies were registered at the individual’s home address in Florida. The other 27 companies were registered in Gary, Indiana, at the same address as Subject Company 1. Geospatial analysis identified the address as a vacant lot in an industrial area. In another example, the Environmental Protection Agency OIG used the ROC’s data visualizations of a link analysis that identifies relationships among entities involved in activities such as collaborating to commit fraud. 
An Environmental Protection Agency OIG official said that the visualization of these relationships made it easier for juries to understand how entities had collaborated in wrongdoing. The ROC's text-mining tools, which handle both structured and unstructured data, were developed to proactively identify high-risk entities. These tools use key words or phrases to rapidly filter thousands of documents, pinpoint high-risk areas, uncover trends, conduct predictive analysis across agencies, programs, and states, and identify and assign weights to risk factors or concepts. The Appalachian Regional Commission OIG used the results of the ROC's unstructured text-mining analyses to identify the highest-risk grantees for review by analyzing text from A-133 Single Audit data for indications of risk, such as a material finding identified in the audit. A Single Audit includes an audit and an opinion on compliance with legal, regulatory, and contractual requirements for major programs, as well as the auditor's schedule of findings and questioned costs. The unstructured text tool could be directed to identify entities, such as grantees with previous negative audit findings, that could represent a higher risk in their use of grant funding. According to a commission OIG official, this approach allowed the OIG to better identify at-risk grantees and establish risk-based priorities for allocating audit and investigative resources. In addition to these examples, OIGs highlighted the following ways in which they used the ROC's analytic capabilities to identify fraud, waste, and abuse in federal spending: In fiscal year 2013, the Department of Homeland Security OIG submitted to the ROC for further research 104 entities receiving debris-removal contracts totaling $329 million from 32 cities in New York and New Jersey.
The ROC analysts’ review was forwarded to the department OIG for appropriate investigative or audit follow-up actions. ROC analysts also provided a risk analysis of these entities to the department OIG for use in planning future audits. Findings submitted to the department OIG included identification of the following:
- Debris-removal firms whose owners had federal and state tax liens.
- Firms previously listed on the Excluded Parties List System, indicating potential financial problems.
- Two companies that received contracts despite having filed for Chapter 7 bankruptcy in December 2010 and having federal tax liens totaling more than $1 million since 2011.
- Organizations with previous fraudulent activities receiving debris-removal contracts from cities where there was an indication the company heads had relationships with city officials.
In part on the basis of these findings, the Department of Homeland Security OIG opened three criminal investigations involving Hurricane Sandy Public Assistance program funds, and the Department of Homeland Security OIG Emergency Management Office conducted four Hurricane Sandy–related audits. The ROC assisted the Department of Housing and Urban Development OIG with information confirming allegations that a loan guarantee specialist had sold HUD-owned properties for less than fair market value to shell companies that he owned and operated, stealing over $843,000 in federal funds. Due in part to the analysts’ efforts, the employee pleaded guilty to wire fraud for his involvement and was sentenced to 26 months in jail. In May 2015, Treasury officials told us that the department did not plan to exercise its discretionary authority to establish a data-analysis center or expand an existing service under the DATA Act.
Officials explained that transferring the ROC assets would not be cost-effective or add value to Treasury operations that identify, prevent, and recover improper payments, and identified the following principal concerns regarding the utility of transferring ROC assets:
- Hardware. Treasury officials viewed hardware, such as monitors and servers, as feasible to transfer, but raised questions about whether it was cost-effective to do so because the ROC's hardware is aging, lessening the value of these assets. In addition, they noted that hardware requires software support contracts to be functional, and as discussed below such contracts are not transferrable.
- Human capital (personnel). Federal personnel rules would not allow a direct transfer of ROC staff to Treasury. Instead, Treasury would have to advertise and hire for these positions using the competitive hiring process, which can be time-consuming. In addition, because some ROC personnel were term-limited hires or contractors, a competitive hiring process would not guarantee that ROC staff would ultimately be selected for employment.
- Data sets. The ROC obtained access to federal data sets through memorandums of understanding (MOU), which neither Treasury officials nor ROC officials believed could be transferred. Instead, Treasury would have to negotiate new MOUs with the federal agency that owned a data set Treasury wished to use. Commercially procured data sets also are not transferrable but would instead have to go through a procurement process.
- Software contracts. Because the Recovery Board extended its software contracts on a sole-source basis when it was reauthorized 2 years ago, Treasury would need to use a competitive procurement process to obtain these data-analytic tools again. Such processes can be lengthy.
In July 2015, Treasury’s Fiscal Assistant Secretary testified before the House Committee on Oversight and Government Reform’s Subcommittee on Government Operations and Subcommittee on Information Technology and addressed the point mentioned above: while the DATA Act authorized Treasury to transfer the Recovery Board’s assets, the act did not transfer the Recovery Board’s authorities to Treasury. For instance, the Recovery Board was granted law-enforcement authorities available under the Inspector General Act of 1978, which allowed the Recovery Board to negotiate relevant access so that the ROC could handle, analyze, and store law-enforcement-sensitive data, including evidence to support grand jury investigations. Similarly, the Recovery Board had special hiring authority that allowed it to select and employ term-limited hires, which provided the Recovery Board greater flexibility in selecting individuals with specific technical expertise and experience. Treasury officials noted that the DATA Act did not transfer the specific mission of the Recovery Board to Treasury, and that this, combined with the absence of law-enforcement authority, created a barrier to fulfilling a role identical to that of the ROC. Although Treasury officials identified cost and other practical challenges to transferring ROC assets, Treasury has an opportunity to transfer information and documentation that could support its efforts to prevent improper payments—particularly, information on the design of data-sharing agreements and requests for software contracts for analytic tools. Treasury officials told us that they believe the ROC’s most valuable asset is its expertise and said they sought opportunities to informally leverage the ROC knowledge base in several ways.
These efforts centered on sharing knowledge between the ROC and Treasury’s Do Not Pay Business Center (DNP), which assists federal agencies in preventing improper payments and leverages some of the same analytical methodologies as the ROC. These efforts include the following:
- Leveraging the knowledge of ROC staff by applying their skills to similar analytic challenges facing DNP. For example, officials stated that the current Director of Outreach & Business Process for DNP is the former Assistant Director for Data and Performance Metrics at the Recovery Board. Her responsibilities at the Recovery Board included the assessment and testing of several prototype systems to support the work of the ROC and its external users in the oversight community. The capabilities of these systems were very similar to those of the DNP systems, in that entity names were matched against open-source databases to identify high-risk vendors. Officials also noted that another Recovery Board employee was hired to DNP, where she uses her knowledge of the root causes of improper payments to help agencies utilize DNP services more efficiently.
- Documenting business processes, procedures, and lessons learned. DNP is also working with the Recovery Board to document business processes, procedures, and lessons learned, as appropriate, in order to incorporate best practices into Treasury’s improper-payment prevention infrastructure. Treasury officials provided documentation of the timeline for obtaining information from the ROC through several meetings and indicated that they were in the process of documenting this information.
Treasury officials said they considered DNP as a possible host of the ROC’s assets but ultimately concluded that the transfer of ROC assets to Treasury would not be cost-effective or add value to Treasury’s efforts.
Officials explained that Treasury already provides services, such as those offered by DNP and Treasury’s Philadelphia Financial Center, to agencies and OIGs to assist in the identification, prevention, and recovery of improper payments. In addition, we note that while Treasury and the ROC were similar in that both sought to address improper and potentially fraudulent payments, there are differences in the particular types of challenges each entity addressed. For instance, as part of its mission, DNP scrutinizes various data sources at the preaward, prepayment, payment, and postpayment stages and analyzes them for indications of potential improper payments and fraud. It does this regularly and on a large scale, matching up to $2.5 trillion in payments each year. DNP’s primary tools for doing this include batch matching payment information against excluded-parties and other “bad-actor” lists and conducting analysis on payment files to examine irregularities, such as duplicates or the same unique identifier associated with different names. The ROC also used data-matching techniques to identify risk, but it generally applied this technique to issues other than payment data, such as assisting law-enforcement investigations to identify instances when several entities were collaborating to commit fraud. Treasury officials have noted that the DATA Act did not grant Treasury the same authorities that the Recovery Board had to support law-enforcement efforts. See figure 6 for a summary of DNP and ROC key activities. While Treasury has taken some steps to transfer expertise to DNP, it may be missing an opportunity to transfer other information and documentation from the ROC to DNP. In May 2014, Recovery Board officials provided Treasury with a transition plan that outlined the board’s assets, including the data sets used by the ROC. The plan indicated that the Recovery Board had used MOUs with the federal agencies owning certain data sets to arrange access to the data.
We note that some of the ROC’s documentation—particularly the MOUs that it had to develop to gain access to certain data sets—represents expertise that may be transferred to supplement DNP’s resources and help support its mission; Treasury might benefit by reconsidering its decision not to assume some of these assets. For example, Treasury is responsible for developing MOUs for data sharing with original source agencies and periodically reviewing the MOUs to determine whether the terms are sufficient. The development of data-sharing agreements is difficult and time-consuming. The Recovery Board maintains information of potential use to Treasury in this regard—namely, it currently retains copies of all MOUs between the Recovery Board and the original source agency. Some of this information will be archived or destroyed when the Recovery Board sunsets. These documents may provide Treasury with a template for future data-sharing agreements—for instance, by providing language on how data might be shared, secured, used, and disposed of that the agency owning the data found acceptable. In addition, as part of procuring the software to develop the ROC’s analytic capabilities, Recovery Board staff worked with the General Services Administration to draft requests for proposals that included technical specifications for the software. This documentation, along with other guidance or technical information that the ROC developed or retained, could serve as a template as DNP expands its capabilities over time. Standards for Internal Control in the Federal Government provides guidance on the importance of managers achieving results through effective stewardship of public resources. Taking advantage of the opening created by the DATA Act to expand its data-analysis capabilities by transferring the expertise gained through the operation of the ROC could assist DNP in its mission to reduce improper payments.
In addition, documenting the rationale for any future decisions on transferring information and documentation would ensure transparency and would be consistent with guidance in Standards for Internal Control in the Federal Government on recording and communicating information, including to external stakeholders that may have a significant effect on the agency achieving its goals. The ROC provided analytic services to its clients, including many OIGs, in support of their audits and investigations supporting fraud prevention and detection. As part of their oversight missions, these entities are often required to improve the efficiency and accountability of federal spending and address fraud, waste, and abuse. However, because Treasury currently does not plan to transfer the assets of the ROC, the center’s users will need to consider alternatives when the Recovery Board closes. Some large OIGs that previously used the ROC told us that they intend to develop their own analytic capabilities. For instance, Department of Homeland Security OIG officials said they hired analysts familiar with link analysis and the relevant software, and are rebidding contracts, in an attempt to replicate some of the resources currently offered at the ROC. Expanding the analytic capabilities of OIGs could help to strengthen the rigor of oversight, allow OIGs to develop the tools that are most useful for their portfolios, and broaden the types of audit and investigative activities OIGs can undertake. However, it is unknown whether OIGs’ developing these capabilities on their own will lead to duplication and fragmentation, and whether the expansion of these capabilities across many entities would offer the same level of expertise and efficiency that OIG officials obtained from the ROC. In addition, such an expansion could be duplicative if each OIG purchased the same types of software and support resources, which may not be the most efficient use of federal funds.
While OIGs with the financial resources to do so may pursue replication of the ROC’s tools, the ROC’s termination may have more effect on the audit and investigative capabilities of some small and medium-sized OIGs that do not have the resources to develop independent data analytics or pay fees for a similar service, according to some OIG officials. According to these officials, the loss of the ROC’s analytical capabilities could also mean that auditors and investigators work much longer to research the same types of linkages that the ROC could verify more efficiently and in a shorter time frame. According to CIGIE officials, maintaining a centralized data-analytics center like the ROC might help reduce unnecessary duplication of effort across the OIG community and help ensure that all OIGs continue to have these resources at their disposal, especially the small to medium-sized offices that do not have the funding to obtain separate capabilities. Established by the Inspector General Reform Act of 2008, CIGIE currently provides oversight resources and guidance to the OIG community and has taken steps to expand the analytic capabilities of the OIGs. For instance, CIGIE developed a virtual platform that allows OIG community members to both contribute and use shared resources such as algorithms, best practices, models, and support documentation. While these resources are helpful, the ROC provided more advanced, customized data-analytics services to the OIGs and also allowed them to leverage ROC software that otherwise would not have been available. In 2013, CIGIE explored the viability of assuming some of the ROC’s assets as a way to provide additional analytic capabilities to the OIG community. At the time, CIGIE estimated that it would cost about $10 million per year to continue to operate the ROC.
Because CIGIE is primarily funded by membership dues, CIGIE determined the additional cost to operate the ROC would be too burdensome for the organization. A CIGIE official indicated the organization has continued to look for opportunities to provide centralized data-analytic resources to OIGs. However, this official said that, given its financial resources (about $6.5 million in operating funds in fiscal year 2016), even if CIGIE were able to do so, this capability would be at a significantly scaled-back level compared to the ROC. Through the DATA Act, Congress provided Treasury the option to transfer the ROC's assets. The act specifically identifies improving the efficiency and transparency of federal spending and the reduction and prevention of improper payments as functions of a data analysis center or expanded service, if Treasury chose to establish or expand one. In this regard, the Chairmen and Ranking Members of the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform wrote a joint letter to the Secretary of the Treasury in July 2015 expressing their concern that the ROC's powerful analytical capabilities would be lost at the end of the fiscal year and underscoring their interest in preserving these capabilities. In highlighting the ROC's evolution since the Recovery Act to assume multiple roles in improving efficiencies in federal spending, the committees stressed that the ROC's various data-analytics capabilities are essential to detect and prevent fraud and reduce improper payments. Specifically, the committees noted that federal agencies agree fraud detection and prevention could be significantly improved with greater access to data and analytical tools, such as those provided by the ROC. 
A legislative proposal that explicitly articulates the relative costs and benefits of developing an analytics center with a mission and capabilities similar to the ROC could help Congress decide whether to authorize and fund such an entity. Given its close connection to the oversight community, and the research it has already undertaken pertaining to the ROC, CIGIE is a logical entity to develop that proposal. If it were to do so, CIGIE could identify and recommend the resources needed—particularly in terms of employees and technology—to establish a ROC-like entity under its auspices. A proposal might also outline the data-analytic services that center could offer the OIG community and the potential results those services might provide. In addition, a proposal could outline any additional authorities needed, such as the ability to handle law-enforcement-sensitive data, which Treasury noted was a barrier to DNP providing services similar to the ROC's. That element of the proposal would help ensure such a new entity would effectively support the oversight community in matters related to law enforcement. By creating a legislative proposal, CIGIE could thus present Congress with the detailed information it would need to make an informed decision about the merits of creating a CIGIE-led data-analytics center. CIGIE officials stated that they have not developed such a proposal absent specific direction from Congress, but these officials expressed concerns about the effect of the September 30, 2015, sunset of the Recovery Board on the OIG community and, as noted above, have sought options within their current budget to increase analytic resources available to OIGs. Further, CIGIE officials stated that with Congress's support they could develop such a proposal, which would be intended to (1) expand analytic resources for the oversight community and (2) help refine the tools Congress and the oversight community use to address improper payments and fraud, waste, and abuse. 
Agencies seeking to address improper payments and fraud, waste, and abuse face challenging prospects, especially in an environment in which estimated improper payments rose by $19 billion to $124.7 billion in fiscal year 2014. To help address such challenges, agencies need sophisticated capabilities to help narrow the window of opportunity for improper payments, including fraud. Such capabilities include data-analytic tools such as those that permit Treasury to perform large-scale analysis of payment data at DNP, as well as the ROC's link analysis and unstructured-text-mining tools that identify and target risk and that, in conjunction with the Recovery Board's investigative authority, aid the government in preventing and reducing improper payments. Although cost and other challenges may limit the viability of transferring certain of the ROC's assets to Treasury, other assets—especially information and documentation that could serve as templates for data sharing or developing the technical specifications for procuring additional software—may assist DNP as it expands its services and capabilities to address improper payments. The ROC's May 2014 transition plan may serve as a basis for Treasury to further assess whether certain data sets could be of assistance to DNP, and documentation of MOUs could help Treasury more quickly replicate such arrangements. Even with such action by Treasury, however, the oversight community will lose some capabilities, as DNP and the ROC generally serve different communities of users and deploy their analytic tools to address different types of problems. Thus, maintaining a separate centralized form of analytic and investigative support for the oversight community would help prevent OIGs from losing valuable tools useful for targeting oversight resources in a data-driven, risk-based manner. 
In addition, a centralized analytics resource could help prevent a potentially inefficient use of funds that could result if OIGs proceeded to duplicate similar oversight tools upon the loss of the ROC. Further, a centralized analytics resource could help maintain high-quality analyses by ensuring regular use of those tools and expertise. Given that congressional oversight committees have shown substantial interest in the ROC's capabilities, recognized its value in helping combat fraud, waste, and abuse in federal spending, and demonstrated their intent to preserve this value for its users, a legislative proposal could begin the process of reestablishing a ROC-like capability to help OIGs sustain their oversight of federal expenditures. To help preserve a proven resource supporting the oversight community's analytic capabilities, Congress may wish to consider directing CIGIE to develop a legislative proposal to reconstitute the essential capabilities of the ROC to help ensure federal spending accountability. The proposal should identify a range of options at varying scales for the cost of analytic tools, personnel, and necessary funding, as well as any additional authority CIGIE may need to ensure an enduring, robust analytical and investigative capability for the oversight community. To capitalize on the opportunity created by the DATA Act, we recommend that the Secretary of the Treasury reconsider whether certain assets—especially information and documentation such as MOUs that would help transfer the knowledge gained through the operation of the ROC—could be worth transferring to DNP to assist in its mission to reduce improper payments. Additionally, the Secretary should document the decision on whether Treasury transfers additional information and documentation and what factors were considered in this decision. We provided Treasury, CIGIE, and the Recovery Board with a draft of this report for review and comment. 
Treasury and CIGIE provided written comments, and the Recovery Board did not provide official comments on our draft report. In its written comments, which are reproduced in appendix II, Treasury concurred with our recommendation that it should consider additional knowledge transfers from the ROC to assist in DNP's mission to reduce improper payments, and stated that it will document its rationale and final decision in this regard. In its response, Treasury noted that it has taken steps to preserve the knowledge gained through the operation of the ROC, including hiring ROC personnel. Furthermore, Treasury noted that it has a robust program in place, including DNP and the Philadelphia Financial Center, that is meeting the needs of federal agencies in preventing, reducing, and recovering improper payments. In its written comments, which are reproduced in appendix III, CIGIE agreed that the ROC provided valuable assistance to many OIGs in support of their investigative, audit, evaluation, and inspection efforts and that this support will be missed when the Recovery Board closes. In its response, CIGIE noted that the OIG community has long recognized the importance of using a variety of techniques, including data analysis, to assist in its oversight responsibilities, and that there may be efficiencies achieved in the development of analytics capabilities by CIGIE that could support the entire OIG community. CIGIE stated that it has already undertaken steps to develop an array of scalable options for such data-analytic capabilities with appropriate regard to both the costs and benefits of such options and the current needs of the OIG community. However, to expand these efforts, it would need additional resources to develop and maintain data-analytic activities. CIGIE also stated that a steady stream of funding is essential for it to develop and maintain any kind of data-analysis function. 
Treasury, the Recovery Board, and CIGIE also provided technical comments that were incorporated into the report, as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of the Treasury, the Chair of the Council of the Inspectors General on Integrity and Efficiency (CIGIE), and the Chair of the Recovery Accountability and Transparency Board (Recovery Board). This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the analytic value of the Recovery Operations Center's (ROC) assets and capabilities to the oversight community, we interviewed Recovery Accountability and Transparency Board (Recovery Board) officials who worked on ROC operations. We also obtained documentation on the types of data sets, analytic tools, and other capabilities the ROC offered to its primary users—namely, the oversight community, which included the Offices of Inspector General (OIG) but also other government entities tasked with ensuring the appropriate use of federal funds, such as law-enforcement agencies and sometimes agency programs. To gather information on how the oversight community used the ROC's assets, we developed criteria for selecting ROC users to interview based on agency size and, drawing on an analysis of client-service data from fiscal year 2014 through March 2015, the frequency and consistency with which the organizations used the ROC. 
On the basis of these criteria, we interviewed officials from the OIGs of the Department of Homeland Security, the Appalachian Regional Commission, the Environmental Protection Agency, the Department of Housing and Urban Development, the Export-Import Bank, the National Science Foundation, the United States Postal Service, and the Department of Justice, as well as officials from the National Intellectual Property Rights Coordination Center. To examine the Department of the Treasury's (Treasury) plans for a transfer of ROC assets, we interviewed Treasury officials responsible for making decisions on transferring the ROC's assets, as well as Recovery Board officials for their observations on transition activities undertaken by Treasury. We reviewed relevant transition-plan documents developed by the Recovery Board that included milestones and guidelines for the transition of ROC assets, which it provided to Treasury. We also reviewed documentation from the Recovery Board on the ROC's resources, including hardware, software contracts, data sets, and human capital, as well as information on its staffing levels over time, to develop a complete picture of the capabilities that Treasury could obtain through a transition. To evaluate the effect of the ROC's capabilities on its audit and investigative user communities, we reviewed documentation from the Recovery Board on the ROC's outcomes, how its clients made use of its resources, and what they achieved. We also conducted interviews with OIGs to understand under what circumstances they used the ROC's assets. We analyzed these interviews, characterizing themes that were similar and different based on the size of the OIG, the frequency with which the OIG used the ROC, and whether the OIG had any in-house data-analytics capabilities. We also discussed plans the OIGs had to replace any ROC capabilities should Treasury opt not to assume all of the ROC's assets. 
We also interviewed officials from the Council of the Inspectors General on Integrity and Efficiency (CIGIE) to obtain their perspectives on how the closure of the ROC may affect the oversight community. We also reviewed CIGIE's budget information and its estimate of the cost of ROC resources. We did not verify the accuracy of CIGIE's estimate of the cost of ROC resources and did not conduct an analysis of whether CIGIE's budget appears to be sufficient for covering these costs. In addition to the contact mentioned above, the following staff members made significant contributions to this report: Joah Iannotta, Assistant Director; Lauren Kirkpatrick, Analyst-in-Charge; Giny Cheong; Beryl Davis; Peter Del Toro; Kathleen Drennan; Vijay D'Souza; Colin Fallon; Shirley Hwang; Maria McMullen; Paula Rascona; Brynn Rovito; and Andrew Stephens.
Improper payments government-wide increased approximately $19 billion in fiscal year 2014, resulting in an estimated total of $124.7 billion. The DATA Act authorized Treasury to establish a data-analysis center or expand an existing service. Congress included a provision in the DATA Act for GAO to review the implementation of the statute. This report addresses (1) the value of the ROC's capabilities provided to the oversight community; (2) Treasury's plans for transferring assets from the ROC; and (3) the potential effect, if any, of Treasury's plans on the ROC's users. GAO reviewed documentation on the ROC's assets, a transition plan developed by the ROC, and its performance data from fiscal year 2012 through March 2015. On the basis of factors such as frequency of requests for assistance and agency size, GAO interviewed various ROC users about their views. GAO also interviewed Treasury and CIGIE officials to obtain their perspectives on the ROC's capabilities and its future status. The Recovery Accountability and Transparency Board's (Recovery Board) Recovery Operations Center (ROC) provided significant analytical services primarily to Offices of Inspector General (OIG) to support antifraud and other activities. Congress initially established the Recovery Board to oversee funds appropriated by the American Recovery and Reinvestment Act of 2009. Subsequently, it expanded the Recovery Board's mandate to include oversight of other federal spending and, most recently, through the Digital Accountability and Transparency Act of 2014 (DATA Act), authorized the Department of the Treasury (Treasury) to transfer ROC assets to Treasury by September 30, 2015, when the Recovery Board closes. On the basis of the ROC's client-service performance data that GAO reviewed, the center researched roughly 1.7 million entities associated with $36.4 billion in federal funds in fiscal years 2012 and 2013. 
The ROC developed specialized data-analytic capabilities that, among other things, helped OIGs identify high-risk entities and target audit and investigative resources to those entities; identified organizations with previous fraudulent activities that nevertheless received contracts during Hurricane Sandy; and identified entities involved in activities such as collaborating to commit fraud, and visually depicted relationships among these entities for juries. Treasury does not plan to transfer the ROC's assets, such as hardware and software, citing cost, lack of investigative authority, and other reasons. However, Treasury could transfer additional information to its Do Not Pay Business Center (DNP), which assists agencies in preventing improper payments. For instance, transferring documentation of data-sharing agreements, which can be difficult and time-consuming to establish, could serve as a template for DNP efforts to expand the number of data sets it uses to identify improper payments. Although cost and other challenges may limit the viability of transferring certain of the ROC's assets to Treasury, other assets—especially those that could serve as templates for negotiating access to and procuring additional data—may assist DNP as it expands its services and capabilities. Because Treasury does not plan to transfer the ROC's assets, the ROC's users will need to consider alternatives when the Recovery Board closes. Specifically, officials from some large OIGs that have used the ROC told GAO they intend to develop their own analytical capabilities. However, officials from some small- and medium-sized OIGs said they do not have the resources to develop independent data analytics or pay for a similar service, thus forgoing the ROC's capabilities. The Council of the Inspectors General on Integrity and Efficiency (CIGIE) could reconstitute some of the ROC's analytic capabilities and has explored options to do so. 
However, CIGIE officials stated that CIGIE does not currently have the resources to accomplish this reconstitution. A legislative proposal that articulates for Congress the relative costs and benefits of developing an entity with a mission and capabilities similar to the ROC could be an appropriate first step in preserving the essence of the center's proven value to its users. CIGIE officials stated that they have not developed such a proposal absent congressional direction, but noted that they support Congress's expressed interest in preserving and expanding analytic resources for the oversight community. If Congress wants to maintain the ROC's analytic capabilities, it should consider directing CIGIE to develop a proposal to that effect to help ensure federal spending accountability. GAO also recommends that Treasury consider transferring additional information to enhance Treasury's DNP. Treasury concurred with GAO's recommendation, and CIGIE is supportive of assuming additional analytical functions for the OIG community with additional funding.
In February 2011, Boeing won the competition to develop the Air Force's next-generation aerial refueling tanker aircraft, the KC-46. This program is one of a few weapon system programs to use a fixed-price incentive (firm target) contract for development in recent years. Defense officials stated that a fixed-price incentive (firm target) contract was appropriate for the program because KC-46 development is considered to be a relatively low-risk effort to integrate mostly mature military technologies onto an aircraft designed for commercial use. The KC-46 development contract is designed to hold Boeing accountable for costs associated with the development of four test aircraft and includes options to manufacture the remaining production lots. The contract limits the government's financial liability and provides the contractor incentives to reduce costs in order to earn more profit. Barring any changes to KC-46 requirements by the Air Force, the contract specifies a target price of $4.4 billion and a ceiling price of $4.9 billion, at which point Boeing must assume responsibility for all additional costs. We previously reported that both the program office and Boeing have estimated that development costs would exceed the contract ceiling price. As of March 2014, Boeing and the program office estimated costs would be over the ceiling price by about $271 million and $787 million, respectively. The program office estimate is higher because it includes additional costs associated with performance as well as cost and schedule risk. In all, 13 production lots are expected to be delivered. The contract includes firm-fixed-price contract options for the first production lot in 2015 and the second production lot in 2016, and options with not-to-exceed firm fixed prices for production lots 3 through 13. The contract also requires Boeing to deliver 18 operational aircraft by August 2017. 
In addition, all required training must be complete, and the required support equipment and sustainment support must be in place, by August 2017. Contract provisions also specify that Boeing must correct any required deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government. After the first two production lots, the program plans to produce aircraft at a rate of 15 aircraft per year, with the final 6 aircraft procured in fiscal year 2027. Separate competitions may occur for later acquisitions, nominally called the KC-Y and KC-Z, to replace the rest of the KC-135 fleet and the KC-10 fleet (the Air Force's large tanker). Boeing plans to modify the 767 aircraft in two phases to produce a militarized aerial refueling tanker: In the first, Boeing is modifying the 767 with a cargo door and an advanced flight deck display borrowed from its new 787, and calling this modified version the 767-2C. The 767-2C will be built on Boeing's existing production line. In the second, the 767-2C will proceed to the finishing center to become a KC-46. It will be militarized by adding air refueling capabilities, an air refueling operator's station that includes panoramic three-dimensional displays, and threat detection and avoidance systems. The Federal Aviation Administration (FAA) has previously certified Boeing's 767 commercial passenger airplane and will certify the design for both the 767-2C and the KC-46. Boeing established plans for the FAA to accomplish the 767-2C and KC-46 certifications concurrently, rather than following the typical procedure of certifying them consecutively. The Air Force also has to certify the KC-46 and will use the FAA's findings to make the overall airworthiness determination. See figure 1 for a depiction of the conversion of the 767 aircraft into the KC-46 tanker. The new KC-46 tanker is expected to be more capable than the KC-135 it replaces in several respects. 
Unlike the KC-135, it will allow for two types of refueling to be employed in the same mission—a refueling boom that is integrated with a computer-assisted control system, as well as a permanent hose and drogue refueling system. The KC-135 has to land and switch equipment to transition from one mode to the other. Also, the KC-46 is expected to be able to refuel in a variety of night-time and covert mission settings and will have countermeasures to protect it against infrared missile threats. The KC-135 is restricted in tactical missions and does not have sufficient defensive systems relative to the KC-46. Designed with more refueling capacity, improved efficiency, and increased cargo and medical evacuation capabilities compared with its predecessor, the KC-46 is intended to provide aerial refueling to Air Force, Navy, Marine Corps, and allied aircraft. Appendix II compares, in more detail, the current capabilities of the KC-135 with the planned capabilities of the new KC-46 tanker. KC-46 total program acquisition costs (development, production, and military construction costs) have remained relatively stable since program start, changing less than 1 percent since February 2011, and the program is meeting schedule and performance goals. Boeing set aside $354 million in contract funds to address identified, but unresolved, development risks. As of December 2013, Boeing had about $75 million remaining to address these risks. Based on Boeing's monthly usage, we calculate that the management reserves will be depleted about 3 months before the KC-46's first flight and approximately 3 years before the development contract is completed. The government, however, would bear no financial risk for future work if Boeing uses all of its management reserves, as long as the Air Force does not make changes to the KC-46 requirements, schedule, or other relevant terms and conditions of the contract. 
Our prior work has found that flight testing is likely to uncover problems that will require management reserves to address. The KC-46 total acquisition cost estimate has remained relatively stable since February 2011, although there have been some minor fluctuations among the development, procurement, and military construction costs that make up this estimate. The largest change is in the program's development cost estimate, which has decreased by about $345 million, or about 5 percent. Development cost reductions can be attributed to fiscal year 2013 sequestration cuts, support for DOD's Small Business Innovation Research program, and cuts to a fund dedicated to tanker replacement. According to program officials, these reductions have not affected the program because it had set aside funds to address engineering changes, which have not occurred thus far. Overall, total acquisition and unit costs have decreased less than 1 percent and quantities have remained the same. Table 1 summarizes the initial and current estimated quantities, costs, and milestone dates for the KC-46 program. The October 2013 development cost estimate of about $6.8 billion includes several contracts for various activities. For example, the program office awarded Boeing a contract for $4.9 billion to develop 4 test aircraft and budgeted over $0.3 billion for the development of aircrew and maintenance training systems. An estimated $1.6 billion is needed to cover other government costs, such as program office support, test and evaluation support, contract performance risk, and other development risks associated with the aircraft and training systems. The procurement cost estimate of $40.3 billion is to procure 175 production aircraft, initial spares, and other support equipment. 
The military construction estimate of $4.2 billion includes the projected costs to build aircraft hangars, maintenance and supply shops, and other facilities to house and support the KC-46 fleet at 10 main operating bases, 1 training base, and the Oklahoma City Air Logistics Complex depot. Boeing is also meeting the high level schedule milestones. Most recently, it conducted the critical design review (CDR) in July 2013, on schedule. However, there are indications that the start of initial operational test and evaluation, which is scheduled for May 2016, may slip. DOD’s Office of the Director, Operational Test and Evaluation, which is responsible for approving operational and live fire test and evaluation within each major defense acquisition program, recently issued its 2013 annual report and continued to recommend that the Air Force plan for a 6- to 12-month delay to the start of initial operational test and evaluation to allow more time to train aircrew and maintenance personnel and verify maintenance procedures. The KC-46 program office agrees that the test schedule is aggressive, but does not believe the delays are certain. The program office projects that the KC-46 aircraft will meet the requirements of all nine key performance parameters by the end of development. Satisfying these key performance parameters will ensure that the KC-46 will be able to accomplish its primary mission of providing worldwide, day and night, adverse weather aerial refueling as well as its secondary missions. See appendix III for a list of the KC-46 key performance parameters. The program office has developed a set of metrics to help gauge its progress towards meeting the performance parameters. For example, one metric tracks operational empty weight because in general, every pound of excess weight equates to a corresponding reduction in the amount of fuel the aircraft can carry to accomplish its primary mission. 
Boeing currently projects that the aircraft will meet the weight target of 204,000 pounds. At the outset of development, Boeing set aside $354 million from contract funds in a management reserve account, about 7 percent of the contract ceiling price, to address identified, yet unresolved, development risks. Last year we reported that Boeing had accomplished approximately 28 percent of the development work and had allocated about 80 percent of the contract’s management reserves. We raised concerns about the high rate at which the management reserves were being used because doing so early in a program is often an indicator of future contract performance problems. Since then, there have been two major actions related to management reserves in 2013. First, in January 2013, Boeing returned $72 million to the management reserves account because program officials determined that the program would pay for fuel for test flights rather than Boeing, new labor rates were lower than planned, and Boeing calculated costs associated with some types of labor incorrectly. Second, in August 2013, Boeing allocated about $42 million of its management reserves, with the largest portion, $24 million, used for a wet fuels laboratory. Boeing initially planned on using corporate funding for the wet fuels laboratory, which was intended for general wet fuels research. However, since the laboratory became more focused on meeting the specific needs of the KC-46 program, Boeing determined it was more appropriate to use management reserves. The other $18 million was used for a variety of other efforts, including minor design and architectural changes. The following figure illustrates management reserve allocation since program start and projects when reserves will be depleted. As of December 2013, about $75 million in unallocated reserves remain. 
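The timing of the projected depletion follows from simple burn-rate arithmetic. The following is a rough check based only on the figures reported here (roughly $75 million unallocated as of December 2013 and a monthly draw averaging over $9 million); it is not a separate GAO estimate:

```latex
\frac{\$75\ \text{million unallocated}}{\approx \$9\ \text{million per month}}
  \approx 8.3\ \text{months}
\quad\Longrightarrow\quad
\text{December 2013} + 8\ \text{months} \approx \text{August--September 2014}
```

Because the reported monthly draw averages "over" $9 million, the actual depletion point would fall at or before this bound, consistent with the program office's September 2014 projection.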
If the current usage trend continues—a monthly average of over $9 million—the program office projects management reserves will be depleted in September 2014, about 3 months before the start of KC-46 developmental flight testing and approximately 3 years before the development contract is completed. According to GAO's Cost Estimating and Assessment Guide, significant use of management reserves early in a program may indicate contract performance problems and decreases the amount of reserves available for future risks, particularly during the test and evaluation phase when demand may be the greatest. Barring any changes to KC-46 requirements, schedule, or other relevant terms and conditions of the contract by the Air Force, Boeing would be solely responsible for the cost of future changes if it uses all of its management reserves, so the government bears no financial risk. The program office and Boeing held the program's CDR in July 2013 and released over 90 percent of the total engineering design drawings, a key indicator that the design is stable. The program is now focused on completing software development and integration, as well as test plans, in preparation for developmental flight testing. Software development plans changed over the course of the past year, in large part because the program solidified requirements at CDR and Boeing brought two of the program's software-intensive system components in-house and found ways to use some of its existing software. Overall, software development is progressing largely according to plan; however, software verification testing has not yet started and software problem reports are increasing. The flight test program is also a concern because it depends on coordination among several separate government entities, requires timely access to receiver aircraft (the aircraft the KC-46 will refuel while in flight), and requires a more aggressive pace than on past programs. 
The program office is conducting a series of rehearsal test exercises and is working with Air Force officials to finalize agreements related to receiver aircraft availability to mitigate these risks. The program office held its CDR in July 2013, with Boeing releasing over 90 percent of the total engineering design drawings. The 90 percent drawing release met a contractual requirement and is consistent with acquisition best practices that use this metric as an indicator that the design is stable. According to program officials, as of December 2013, Boeing had released 98.6 percent of the expected engineering design drawings and the remaining drawings relate almost exclusively to aircraft interiors and are not considered to be complex. Figure 3 shows the number of design drawings completed since Boeing began tracking it in May 2011. Prior to CDR, the program office and Boeing took a number of steps to ensure the program had a stable design. This included holding a series of sub-system CDRs, replacing two system components that were not sufficiently mature, and addressing previously identified risks, such as aircraft weight. Currently, Boeing is working to alleviate lingering instability in key physical components related to aerial refueling—the centerline drogue system and wing aerial refueling pod. Boeing still considers the instability of these components to be a moderate program risk, and its strategy is to conduct modeling and simulation studies and perform ground tests to help mitigate this risk. As of January 2014, Boeing estimates that 15.8 million lines of code will be needed for the KC-46. Boeing plans to rely primarily on reused software from its commercial aircraft for the 767-2C and more heavily on modified or new software for the military subsystems on the KC-46. As shown in table 2, the most recent plan is for Boeing to reuse existing software for 83 percent of its software needs, which has helped reduce risks associated with software development. 
According to program officials, the changes in reused, modified, and new software between 2011 and 2013 are largely the result of the program solidifying requirements for CDR and Boeing’s effort to reduce the risk associated with the development of two software-intensive system components related to situational awareness. According to these officials, there were limitations with the original software developer’s software and Boeing ultimately decided to bring the development effort in-house, leveraging existing software code to mitigate risk. Overall, we found that software development is currently progressing mostly according to Boeing’s plan. As shown in figure 4, as of January 2014, Boeing reported that 73 percent of software had been delivered compared to its plan for having 76 percent at this time (96 percent of the planned activities). A large portion of the software that has been delivered to this point is reused software that is needed for the initial build of the 767-2C aircraft. A small amount of development work related to the aerial refueling software, about 3 percent, is behind schedule. The remaining software, related to key military subsystems for remote vision and situational awareness, among other capabilities, is expected to be delivered to Boeing through the beginning of June 2014. While the program’s progress for software development is encouraging, program officials are expecting software verification testing, which has not yet begun, to be challenging. Notably, Boeing must verify the software code to determine if it works as intended. Approximately 735,000 lines of the code are new and relate in large part to key military unique systems. Moreover, Boeing’s software integration lab that simulates the KC-46 cockpit will be at near capacity between February and June 2014. Boeing could have difficulty completing all testing if more retests are needed than expected. 
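The parenthetical above follows directly from the two delivery figures; a quick check of the arithmetic:

```python
# Boeing reported 73 percent of software delivered against a plan of
# 76 percent at this point in time, i.e., about 96 percent of planned
# delivery activity was accomplished.
delivered, planned = 0.73, 0.76
pct_of_plan = delivered / planned * 100
print(round(pct_of_plan))  # 96
```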
In addition to capacity concerns, we found that software problem reports are increasing. There were over 600 software problem reports as of January 2014 that needed to be addressed, which will add pressure to an integration lab already operating at near capacity. Thirty-five percent of the problem reports were considered urgent or high priority problems that need to be fixed as quickly as possible. Program officials stated that avionics flight management computer software has been a major contributor to the problem reports to date and that Boeing is working closely with this supplier to ensure problems are addressed. This particular supplier has recently increased the number of staff working on this software effort from 3 to 24 people to address the backlog of problem reports. The program’s flight test schedule continues to be a concern due to the need for extensive coordination among government entities, the need for timely access to receiver aircraft, and its aggressive pace. The following is a summary of the various testing concerns and the steps, if any, the program office and Boeing are taking to address them. Coordinating on concurrent test activities: Government agencies and Boeing have agreed to a “test once” approach, whereby many of the test activities for FAA certification, developmental testing, aerial refueling, and operational testing will be combined to achieve greater efficiency. Currently, Boeing, the program office, the Air Force, Navy, FAA, and officials from the Office of the Secretary of Defense organizations for developmental and operational testing are finalizing detailed test plans, which are needed to guide flight test activities that are scheduled to begin in June 2014 for the 767-2C and in January 2015 for the KC-46. The program office is conducting a series of rehearsal test exercises before any flight tests take place to ensure that all parties understand their roles and responsibilities during testing. 
Program officials report that three of four such exercises have been completed, with the next scheduled for September 2014. Officials said this exercise will focus on preparing for the KC-46’s first flight. Ensuring receiver aircraft availability: To meet the test schedule, receiver aircraft, such as the F-22A and the F/A-18C, are needed at certain locations and times to participate in the program’s test activities. The program office has finalized one memorandum of agreement with Air Force officials for access to 14 receiver aircraft and stated that it is currently developing two additional agreements: one with the Navy for two additional types of aircraft and one with the United Kingdom for its aircraft. If receiver aircraft are not available when needed, Boeing’s test schedule could be disrupted. Maintaining flight test pace: The program office and Boeing report that maintaining the program’s flight test pace is among the program’s greatest risks. Program officials explained that this risk captures both the 65-hour-per-month commercial test pace for the 767-2C aircraft and the 50-hour-per-month military test pace for the KC-46 aircraft. To adhere to the aggressive test schedule, Boeing officials stated that they plan to fly development aircraft 5 to 6 days per week with roughly 5 to 6 hours per mission (a pace for military flight testing that DOD test organizations note is more aggressive than other programs have historically demonstrated). Boeing officials believe they can achieve the test pace required because of Boeing’s testing experience with other commercial aircraft and the KC-10 tanker program. In addition, Boeing has local maintenance and engineering support available to support the test program as well as control over flight test priorities for the commercial testing since the development aircraft are being tested at Boeing facilities. 
The program has made progress in readying the KC-46 for low rate initial production in 2015. Boeing has started manufacturing all four development aircraft on schedule, but has experienced some delays with the first aircraft. The program office and Boeing have also taken several steps to capture the necessary manufacturing knowledge to make informed decisions as the program transitions from design into production. This includes identifying and assessing critical manufacturing processes to determine if they are capable of producing key military subsystems in a production representative environment. The program also established a reliability growth curve and Boeing will begin tracking its progress towards reaching reliability goals once testing begins. Boeing is making progress manufacturing most of the military unique subsystems, but a test article for a critical aerial refueling subsystem has been delayed by almost a year due to parts issues. The Air Force plans to eventually field a total of 179 aircraft no later than January 2031. Figure 5 displays the timeline for the manufacture of the development, low rate production, and full rate production aircraft. Boeing began producing the first development aircraft (a 767-2C) in June 2013, and Boeing officials said the aircraft was 76 percent complete as of mid-January 2014. The aircraft was scheduled to be powered on for the first time in early December 2013, but program officials told us that activity has slipped until the end of April 2014. Boeing officials attributed the schedule slip to late supplier deliveries. Completion of major assembly operations has also slipped from mid-January until mid-March. 
Program officials told us that Boeing has been able to resequence tasks thus far to avoid affecting the critical path, such as adding the body fuel tanks to the first 767-2C earlier and in a different facility than originally planned. Program officials are assessing whether these delays will affect the timing of the first flight of the 767-2C, scheduled for June 2014. Boeing and program officials said that manufacturing of the second development aircraft was going better than on the first aircraft, reporting that the aircraft was 65 percent complete as of mid-January 2014. Officials added that there had been a 75 percent reduction in overall parts shortages. The third and fourth aircraft just began production in late October 2013 and mid-January 2014, respectively. From the first to the fourth development aircraft, Boeing is anticipating improvement in its ability to manufacture the aircraft. For example, the first aircraft is scheduled to take about 11 and a half months from the start of major assembly until first flight, while the fourth aircraft is only scheduled to take about 7 months. Once complete, the four development aircraft will then enter the finishing center at various points between June 2014 and September 2015 to be converted to a KC-46 tanker. The program office and Boeing have taken several initial steps to help ensure that the KC-46 will be ready for low rate production in August 2015 and that the aircraft will be reliable. In our prior work, we identified the activities required to capture manufacturing knowledge. These activities include (1) identifying key system characteristics and critical manufacturing processes; (2) establishing a reliability growth plan and goals; (3) conducting failure modes and effects analysis; (4) conducting reliability growth testing; and (5) determining whether processes are in control and capable. Table 3 provides a description of these activities and progress the program has made for each. 
Since the 767-2C will be manufactured on Boeing’s existing 767 production line, the program office and Boeing have focused their attention on identifying the key system characteristics and critical manufacturing processes for the military unique subsystems. Prior to CDR, the program office and Boeing completed assessments of 12 critical manufacturing processes, such as the assembly of aerial refueling components. These assessments indicated that key military subsystems could be manufactured in a production representative environment. The program office and Boeing plan on conducting another assessment prior to August 2015 to determine if the program is ready to begin low rate initial production. The program office has established a reliability growth curve and goal. To assess reliability growth, the program is tracking the mean time between unscheduled maintenance events due to equipment failure, which is defined as the total flight hours divided by the total number of incidents requiring unscheduled maintenance. These failures are caused by a manufacturing or design defect and require the use of Air Force resources, such as spare parts or manpower, in order to fix them. The program has set a reliability goal of 2.83 flight hours between unscheduled maintenance events, but does not expect that goal to be achieved until the program has logged 50,000 flight hours. Figure 6 below depicts how the program office expects the aircraft’s reliability to improve over the program’s initial 5,000 flight hours. The program expects to be above the idealized reliability growth curve at the start of testing because initial testing will be on a 767-2C, a derivative of a commercial aircraft that has been flying since the 1980s. Reliability is projected to fall below expectations once the military sub-systems are added to the aircraft. 
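The reliability metric described above reduces to a simple ratio. The sketch below illustrates the definition; the event counts are hypothetical, chosen to reproduce the roughly 2-hour level the program expects at the start of initial operational test and evaluation.

```python
def mtbume(flight_hours: float, unscheduled_events: int) -> float:
    """Mean time between unscheduled maintenance events: total flight
    hours divided by the number of incidents requiring unscheduled
    maintenance due to equipment failure."""
    return flight_hours / unscheduled_events

# Hypothetical illustration (event count is assumed, not reported):
# 5,000 flight hours with 2,500 qualifying events yields 2.0 hours
# between events. The 2.83-hour goal is not expected to be achieved
# until roughly 50,000 flight hours have been logged.
print(mtbume(5_000, 2_500))  # 2.0
```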
The program then expects the reliability to steadily improve to the point where the aircraft could fly about 2 hours between unscheduled maintenance events at the start of initial operational test and evaluation. As shown in figure 6 above, the program will be on the idealized reliability growth curve at that point. Boeing has also initiated a failure modes and effects analysis that covers 41 subsystems. Boeing and the program office rely on this analysis to determine which subsystems on the aircraft are likely to fail, when and why they fail, and whether those subsystems’ failures might threaten the aircraft’s safety. Boeing is also using this information to develop a tool to detect and log equipment failures. The program office plans to share the analysis with aircraft maintenance staff. The program has not yet begun two critical manufacturing and reliability assessment activities. First, the program is not currently tracking reliability growth because the 767-2C first flight is not scheduled to take place until June 2014 and no flight hours have been accrued yet. Second, the program has not determined whether manufacturing processes are in control and capable of producing parts consistently with few defects. The program plans to review and verify that process controls are in place to ensure the quality of the manufacturing process as part of its next assessment of critical manufacturing processes prior to the low rate production decision in August 2015. Program officials said their review would be focused on whether these process controls are in place rather than analyzing the data to determine if the processes are actually in control. Boeing is making progress manufacturing most of the military unique subsystems, such as the aerial refueling operator station, but the test refueling boom’s schedule has slipped by almost a year due to parts delays. 
Boeing’s original design included parts that proved challenging to fit within the boom’s space constraints, and other parts were redesigned to improve the boom’s safety. Boom parts suppliers, however, have experienced delays in delivering the redesigned parts to Boeing, which has prompted Boeing to send staff to help one of the suppliers minimize further schedule slips. Boeing officials told us they decided to build a test boom as a risk reduction effort and plan to apply lessons learned from producing the test boom to future boom production. However, program officials currently estimate that boom parts delays have also led to an approximately 1-month schedule slip in the first development aircraft’s boom. Boeing is facing some schedule pressure on this boom because it is now scheduled to be completed only a few days before the start of ground vibration testing. Boeing officials said they needed the boom for this testing and would like to complete ground vibration testing before the 767-2C’s first flight. The second development aircraft’s boom is scheduled to be built in only 5 months. Based on its current schedule, Boeing needs to have this boom completed by June 2014 in order to meet the KC-46’s first flight, scheduled for January 2015. The KC-46 program has made good progress to date—acquisition costs have remained relatively stable, high-level schedule and performance goals have been met, the critical design review was successfully completed, and the contractor is building development aircraft. The next 12 months will be challenging as the program must accomplish a significant amount of work and the margin for error is small. For example, the program is scheduled to complete software integration and the first test flights of the 767-2C and KC-46. The remaining software development and integration work is mostly focused on military software and systems and is expected to be more difficult relative to the prior work completed. 
The program’s test activities continue to be a concern due to the aggressive test schedule. Detailed test plans must be completed and the program must maintain an unusually high test pace to meet this schedule. Perhaps more importantly, agencies will have to coordinate to complete multiple airworthiness certifications concurrently. While efficient, this concurrent approach presents significant risk to the program. The program office must also finalize agreements now in progress to ensure that receiver aircraft are available when and where they are needed to support flight tests. Any discoveries made in testing that require design changes may negatively affect program schedule and delivery to the warfighter. Parts delays on the first development aircraft and a critical aerial refueling subsystem are also causing increased schedule pressure. With these risks in its near future, the KC-46 program will continue to bear watching. While all of the risks currently appear to be recognized, any slips in software testing, flight testing, and manufacturing as the program moves forward could delay the program. Due to existing schedule risks and the fact that the program is entering a challenging phase of testing, we recommend that the Secretary of Defense direct the Air Force to study the likelihood and potential effect of delays on total development costs, and develop mitigation plans, as needed, related to potential delays. DOD provided us with written comments on a draft of this report, which are reprinted in appendix V. DOD concurred with our recommendation. The KC-46 program office conducts an annual analysis of cost and schedule risks to quantify the potential effect of delays on program costs and officials told us they will consider the risks we identified in that analysis. We also incorporated technical comments from DOD as appropriate. 
We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; and the Director of the Office of Management and Budget. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff contributing to this report are listed in appendix VI. This report examines the Air Force’s continued development of the KC-46 tanker program. Specifically, we examined (1) progress toward cost, schedule, and performance goals; (2) development challenges, if any, and steps to address them; and (3) progress in manufacturing the aircraft. To assess progress toward cost, schedule, and performance goals in the calendar year of this review (2013), we reviewed briefings by program and contractor officials, financial management documents, program budgets, defense acquisition executive summary reports, selected acquisition reports, monthly activity reports, technical performance indicators, risk assessments, and other documentation. To evaluate cost information, we analyzed earned value management data and the contractor’s use of management reserves. To assess development schedule progress, we compared program milestones established at program start to current estimates and reviewed Defense Contract Management Agency monthly assessments of KC-46 schedule health and program office schedule analyses. We also interviewed program officials to determine the status of Department of Defense (DOD) efforts to implement our prior recommendations aimed at improving the program’s integrated master schedule. 
To measure progress toward performance goals, we reviewed current estimates of key performance parameters, key system attributes, and technical performance metrics and compared them to threshold and objective requirements. We discussed results of the initial KC-46 operational assessment with officials from the Air Force Operational Test and Evaluation Center and the Director of Operational Test and Evaluation. We also interviewed relevant officials from the KC-46 program office, Boeing, and the Department of Defense. To assess development challenges and steps to address them, we examined program documentation, such as critical design review briefings, risk assessments and briefings, software metrics reports, integrated test team meeting minutes, and updates to key documents such as the technology maturation, software development, and integrated test plans. We also analyzed pertinent DOD documents including the Defense Contract Management Agency’s monthly program assessment reports, the first operational assessment by the Air Force Operational Test and Evaluation Center, and annual reports issued by the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation and the Director of Operational Test and Evaluation. When possible, we attended integrated test team and program management meetings to obtain additional insight on any challenges or mitigation efforts being discussed by Boeing and program officials. In addition, we examined the program’s progress in completing design drawings and maturing critical technologies at the critical design review. Furthermore, we interviewed officials from Boeing, the program office, the Office of the Secretary of Defense, and the Department of the Navy to assess development challenges and the suitability of steps taken to address them. 
To assess progress in manufacturing aircraft, we analyzed program office and Boeing documents, such as the manufacturing program plan; quarterly manufacturing and quality briefings; and program schedules. We used these documents to compare Boeing’s initial schedule for completing aircraft and boom manufacturing to its actual performance and to identify challenges, if any. We also evaluated whether the program captured manufacturing knowledge recommended in prior GAO best practices work. This included reviewing manufacturing readiness assessments and comparing the results and future plans to DOD guidance and manufacturing best practices identified in prior GAO work. Lastly, we interviewed Boeing and program officials to discuss manufacturing progress and challenges and conducted a site visit of Boeing’s 767 production line and its temporary and permanent boom production facility and finishing center. We conducted this performance audit from May 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 

Appendix III: Description of Key Performance Parameters 

The key performance parameters require that the aircraft: be capable of accomplishing air refueling of all Department of Defense current and programmed (budgeted) receiver aircraft; be capable of conducting both boom and drogue air refueling on the same mission; be capable of carrying certain amounts of fuel (to use in air refueling) certain distances; be capable of worldwide flight operations in all civil and military airspace; be capable of transporting certain amounts of both equipment and personnel; be capable of receiving air refueling from any compatible tanker aircraft; be able to operate in chemical and biological environments; be able to have effective information exchanges with many other Department of Defense systems to fully support execution of all necessary missions and activities; be capable of operating in hostile threat environments; and be capable of conducting drogue refueling on multiple aircraft on the same mission. 

Appendix IV: KC-46 Critical Technology Elements 

The display screens at boom operator stations inside the aircraft provide the visual cues needed for the operator to monitor the aircraft being refueled before and after contact with the refueling boom or drogue. The images of the aircraft on the screens are captured by a pair of cameras outside the aircraft that are meant to replicate the binocular aspect of human vision by supplying an image from two separate points of view, one for each eye. The resulting image separation provides the boom operator with greater fidelity and a more realistic impression of depth, or a third dimension. Testing to date: Similar technology has been used on two foreign-operated refueling aircraft and a representative model in tests with other Boeing tankers. 

The route generation engine is a component of the reactive threat avoidance sub-system. This sub-system monitors for ground and surface threats based on the aircraft’s location and the active flight route. It identifies threats that impact the current route, provides a safer alternative route, and alerts the pilot that a new route is available for review and acceptance. Testing to date: A recent version of the route generation engine was flown and demonstrated on a Navy aircraft, but improvements have been made that have not been flight tested. 
In addition to the contact name above, the following staff members made key contributions to this report: Cheryl Andrew, Assistant Director; Jeff Hartnett; Katheryn Hubbell; John Krump; LeAnna Parkey; and Robert Swierczek.
Aerial refueling allows U.S. military aircraft to fly farther, stay airborne longer, and transport more weapons, equipment, and supplies. Yet the mainstay of the U.S. tanker forces—the KC-135 Stratotanker—is over 50 years old. It is increasingly costly to support and its age-related problems could potentially ground the fleet. As a result, the Air Force initiated the $51 billion KC-46 program to replace the aerial refueling fleet. The program plans to produce 18 tankers by 2017 and 179 aircraft in total. The National Defense Authorization Act for Fiscal Year 2012 mandated GAO to annually review the KC-46 program through 2017. This report addresses (1) progress made in 2013 toward cost, schedule, and performance goals, (2) development challenges, if any, and steps to address them, and (3) progress made in manufacturing the aircraft. To do this, GAO reviewed key program documents and discussed development and production plans and results with officials from the KC-46 program office, other defense offices, and the prime contractor, Boeing. The KC-46 program has made good progress over the past year—acquisition costs have remained relatively stable, the critical design review was successfully completed, the program is on track to meet performance parameters, and the contractor started building development aircraft. Total program acquisition costs—which include development, production, and military construction costs—and unit costs have changed less than 1 percent since February 2011. As of December 2013, Boeing had about $75 million of its management reserves remaining to address identified, but unresolved, development risks. There are indications that the start of initial operational test and evaluation, which is scheduled for May 2016, may slip 6 to 12 months. According to the Director of Operational Test and Evaluation, more time may be needed to train aircrew and maintenance personnel and verify maintenance procedures. 
The program released over 90 percent of the KC-46 design drawings at the critical design review, indicating that the design is stable. Overall, development of about 15.8 million lines of software code is progressing mostly according to plan. The next 12 months will be challenging as the program must complete software development, verify that the software works as intended, finalize developmental flight test planning, and begin developmental flight tests. Software problem reports are increasing and Boeing could have difficulty completing all testing if more retests are needed than expected. Developmental flight testing activities are also a concern due to the need for extensive coordination among government agencies, the need for timely access to receiver aircraft (aircraft the KC-46 will refuel while in flight), and the aggressive test pace. The program office is conducting test exercises to mitigate risks and working with Navy and United Kingdom officials to finalize agreements to have access to necessary receiver aircraft. The program has also made progress in ensuring that the KC-46 is ready for low rate initial production in 2015. Boeing has started manufacturing all four development aircraft on schedule. The program office has identified its critical manufacturing processes and verified that the processes are capable of producing key military subsystems in a production representative environment. In addition, the program has established a reliability growth curve and will begin tracking its progress towards reaching reliability goals once testing begins. Boeing is experiencing some manufacturing delays due to late supplier deliveries on the first aircraft and parts delays for a test article of a critical aerial refueling subsystem, but the program has not missed any major milestones. 
GAO recommends that the Air Force determine the likelihood and potential effect of delays on total development costs, and develop mitigation plans, as needed, related to potential delays. DOD concurred with the recommendation.
Under the Mineral Leasing Act (30 U.S.C. 181 et seq., as amended) (MLA), revenues for federal onshore minerals, which include bonuses, rents, and royalties, are distributed as follows: 50 percent to the state in which the production occurred, 10 percent to the general treasury, and 40 percent to the reclamation fund. Lands leased under other laws have different distribution requirements. In fiscal year 1996, 41 states received a total of about $481 million in revenues from the development of federal onshore minerals. Wyoming, New Mexico, and California received about $206 million, $124 million, and $28 million, respectively. Wyoming, New Mexico, and California also manage mineral development on private and state-owned lands. In these states, revenues from state-owned land are used to fund public educational institutions. Wyoming’s bonus, rental, and royalty revenues from minerals on state-owned land in fiscal year 1996 were $29 million. In New Mexico, these revenues from minerals on state land were $115 million. California’s revenues from state-owned minerals onshore were $3 million. In 1991, with the passage of the Department of the Interior’s appropriation bill, states receiving revenues from federal onshore minerals began paying a portion of the costs to administer the onshore minerals leasing laws—a practice known as “net receipts sharing.” Net receipts sharing became permanent with the passage of the Omnibus Budget Reconciliation Act of 1993 (OBRA), which effectively requires that the federal government recover from the states about 25 percent of the prior year’s federal appropriations allocated to minerals leasing activities. (See app. I for a detailed description of net receipts sharing.) In general, managing federal and state minerals includes some level of resource planning and use authorization, compliance inspections, revenue collection, and auditing. 
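The statutory split and the net receipts-sharing deduction described above reduce to simple percentages. The sketch below uses hypothetical dollar amounts, and the deduction step is simplified; the actual computation is described in appendix I.

```python
def distribute_mla_revenues(revenues: float) -> dict:
    """Split federal onshore mineral revenues under the Mineral
    Leasing Act: 50 percent to the state in which production
    occurred, 10 percent to the general treasury, and 40 percent
    to the reclamation fund (dollars in millions)."""
    return {
        "state": 0.50 * revenues,
        "general_treasury": 0.10 * revenues,
        "reclamation_fund": 0.40 * revenues,
    }

# Hypothetical example: $100 million in bonuses, rents, and royalties.
shares = distribute_mla_revenues(100.0)

# Under net receipts sharing, the state's payment is then reduced by
# 25 percent of the prior year's federal appropriations allocated to
# minerals leasing activities attributable to that state (the $8
# million appropriation figure here is illustrative).
allocated_appropriations = 8.0
net_to_state = shares["state"] - 0.25 * allocated_appropriations
print(shares["state"], net_to_state)  # 50.0 48.0
```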
Resource planning may include identifying areas with a potential for mineral resources; planning for future mineral development and how that development will affect other resources on the land (such as recreation, livestock grazing, and wildlife); and geophysical exploration by potential lessees. Use authorization includes lease issuance and the approval of post-leasing activities—including the drilling of oil and gas wells and the extraction of other mineral resources—and such associated activities as the construction of roads, facilities, pipelines, storage tanks, and modifications to operations. Once approved and under way, these operations may be inspected periodically to determine whether they comply with applicable laws, regulations, and lease terms. The revenues from mineral leasing and information about production are collected and may be audited.

The federal government allocated $14.6 million of its appropriations for minerals management to Wyoming, New Mexico, and California for fiscal year 1996. This amount, which will be deducted from the states’ 1997 revenue payments, was computed on the basis of allocations of the appropriations for all onshore leasable minerals management activities conducted by the Forest Service, BLM, and MMS—the three key agencies responsible for administering the federal onshore minerals leasing laws. Table 1 shows the fiscal year 1996 net receipts-sharing deductions for Wyoming, New Mexico, and California and the portions attributable to the Forest Service, BLM, and MMS.

The Forest Service manages mineral uses occurring in national forests, which includes determining whether forest areas are suitable for leasing, participating with BLM in making leasing decisions for forest land, and managing mineral operations on forest land.
These activities are required under several federal laws, including (1) the National Forest Management Act of 1976, which prescribes forest planning processes; (2) the National Environmental Policy Act of 1969 (NEPA), which requires environmental analysis and documentation; and (3) the Federal Onshore Oil and Gas Leasing Reform Act of 1987, which authorized the Secretary of Agriculture to determine which Forest Service lands could be leased for mineral development and to specify the conditions placed on mineral leases.

Likewise, BLM manages surface uses and makes leasing decisions on BLM-managed land. BLM also issues leases and manages operations for oil, gas, coal, and other minerals (1) on lands with split ownership, namely where the minerals are federally owned but the surface is not, and (2) on certain lands managed by other federal agencies. BLM is also responsible for performing inspections to verify the quantity of minerals produced on federal leases. In addition to MLA, major federal laws governing BLM’s management of onshore minerals include (1) the Federal Land Policy and Management Act of 1976, which gave BLM general management responsibilities for public land, endorsed multiple-use management, and prescribed a planning process similar to the Forest Service’s; (2) NEPA; (3) the Federal Onshore Oil and Gas Leasing Reform Act; (4) the Federal Coal Leasing Amendments Act of 1976; and (5) the Federal Oil and Gas Royalty Management Act of 1982 (FOGRMA), which was enacted to ensure that the Secretary of the Interior properly accounts for all oil and gas from public lands.

MMS collects, audits, and disburses most mineral revenues from production on federal lands. In support of these functions, the agency maintains information on leases and royalty payers. MMS also collects and compares royalty and production information reported by payers and operators. Finally, MMS audits payments received from selected royalty payers.
As with some of BLM’s minerals management activities, MMS’ functions stem from requirements in FOGRMA.

In fiscal year 1996, Wyoming’s onshore minerals management program cost $2.0 million, New Mexico’s cost $7.2 million, and California’s cost $9.9 million. All three states lease state-owned land within their boundaries for minerals development. Each of the three states has a land office responsible for leasing and for collecting revenues from those leases. The states also have regulatory agencies that oversee mineral operations within their boundaries, including those on state and private land, and where applicable, on federal and other land. Appendix II includes a more detailed description of the three states’ mineral programs. Table 2 shows the costs for the states’ minerals management programs.

As land managers, the states’ land offices serve some of the same functions for state land as the Forest Service and BLM do for federal land. The states’ land offices decide how state land will be used and issue leases for mineral development. As royalty managers, they perform most of the same functions as MMS does for federal royalties. They collect and account for mineral revenues, including bonuses, rents, and royalties, and audit these payments.

As BLM does for federal lands, the states’ regulatory agencies review and approve drilling and extraction permits and operations; inspect operations for compliance with safety, environmental, and operational requirements; and verify and compile data on reported production on state-owned lands. The state regulatory agencies are also authorized to inspect operations for compliance with safety and environmental standards on private land within the state. The agencies are mandated by state laws to perform other minerals management activities on federal, state, private, and other lands.
These activities include making spacing determinations, reviewing and approving discharge plans for oil fields, witnessing surface casing and well-plugging, and inspecting and permitting waste disposal for commercial facilities.

Because of differences between the federal and state programs, the states’ costs for these programs cannot be meaningfully compared. Current laws require the Forest Service and BLM to create land-use plans that evaluate alternative resource uses—including minerals—on federally managed lands. These plans must include public involvement and may be appealed to the agency or challenged in court. The three states we reviewed do not have similar land-use planning processes, and neither Wyoming nor New Mexico has environmental analysis requirements similar to those of the federal land-managing agencies. In responding to a draft of this report, officials from California’s State Lands Commission commented that the California Environmental Quality Act and other state laws require the protection of the environment, which includes developing environmental information and mitigation requirements; protecting significant environmental values on state lands; and balancing public needs in approving the uses of state lands. A New Mexico state official noted that mineral development in that state does not occur at the expense of archaeological or environmental concerns.

Federal law also requires certain royalty management activities that are different from state activities. For example, FOGRMA requires the Secretary of the Interior to have a strategy for inspecting oil and gas operations to ensure that all production is reported. This strategy includes inspections of equipment, specific measurement of oil and gas production, and site security procedures. In contrast, the states rely primarily upon comparisons of royalty and production reports to verify production amounts rather than on field inspections. (See app. II for more details on the states’ activities.)
Other differences are state-specific. For example, federal land in Wyoming contains over twice as many producing coal leases as does state land. By law, BLM must perform an economic evaluation of coal for leasing but not for oil and gas leasing. The scope of the regulatory agencies’ responsibilities also differs from that of the federal program, as these agencies regulate mineral development on state, private, and in some cases, federal and other land. In their response to a draft of this report, officials in California’s Division of Oil, Gas, and Geothermal Resources commented that the Division’s regulatory scope is unique among the states, as about 95 percent of its workload involves administering laws and regulations on private and granted lands.

We provided the Department of the Interior, the Forest Service, BLM, and MMS with a draft of this report. Wyoming’s State Land and Farm Loan Office and Oil and Gas Conservation Commission, New Mexico’s State Land Office and Oil Conservation Division, and California’s State Lands Commission and Conservation Department’s Division of Oil, Gas, and Geothermal Resources were also provided with a draft of this report.

In written comments, the Department of the Interior and MMS generally agreed with the contents of the report. (See app. IV.) BLM provided us with technical clarifications, which we have incorporated as appropriate, and also suggested that we include information on the states’ mining regulatory agencies. However, we did not include this information because we focused on activities comparable to the federal leasable minerals program (for which net receipts sharing is computed), which does not include all mining-related activities. The Forest Service had no comments on the draft.

In written comments, Wyoming’s Office of the Governor acknowledged that the federal and state mineral leasing programs are different, but disagreed with our position that the costs cannot be meaningfully compared. (See app. V.)
The Governor’s Office commented that a comparison could be made that includes an analysis of the similarities and differences in the programs. Our analysis shows that because of differences in the programs such as land-use planning, environmental analysis, and production verification requirements, a cost comparison would not be meaningful. The Governor’s Office also requested that we expand our report to provide a breakdown of the federal program’s direct and indirect costs by function. However, our report discusses the federal minerals management program from the perspective of net receipts sharing, which is based upon appropriations and not on actual program costs. Accordingly, we describe how the appropriations are allocated but do not provide actual costs; such a discussion would be outside the scope of this report. Furthermore, we believe that regardless of the level of cost detail provided, a comparison between federal costs and state costs would not be meaningful because of the differences in the programs. The Office of the Governor’s comments included comments and technical clarifications from Wyoming’s Oil and Gas Conservation Commission, State Land and Farm Loan Office, and Department of Audit, which we incorporated as appropriate.

In commenting on this report, New Mexico’s Oil Conservation Division stated that the states’ regulatory agencies are responsible for minerals management activities beyond the management of state-owned minerals. (For written comments, see app. VI.) We adjusted the text of our report to clarify the role of the regulatory agencies in managing state, private, and where applicable, federal and other lands. Furthermore, the Oil Conservation Division commented that many of the net receipts-sharing costs are not justifiable; however, such an assessment was outside the scope of our review.
In written comments, California’s State Lands Commission stated that the draft was generally a fair and accurate review of California’s minerals management costs. (See app. VII.) However, Commission officials commented that our reporting of the Division of Oil, Gas, and Geothermal Resources’ costs overstated the cost of managing state lands. We adjusted the text of our report to clarify that the regulatory agencies’ scope of authority extends beyond state lands in all three states and that about 95 percent of California’s Division of Oil, Gas, and Geothermal Resources’ time is devoted to regulating the development of minerals on privately owned and other land.

The Commission also commented that it is responsible for implementing the California Environmental Quality Act and is required to develop environmental information and mitigation requirements. Furthermore, it commented that state law requires the Commission to protect significant environmental values on state lands and to balance public needs in approving the uses of state lands. We incorporated this information into the text of this report. The Commission also commented that it has a program of inspections and other audit procedures to verify production amounts and royalty payments that is more extensive than we had described in the draft. We incorporated specific recommended changes into our discussion of California’s minerals management program in appendix II. California’s Division of Oil, Gas, and Geothermal Resources provided technical clarifications, which we also incorporated into the report as appropriate.

In conducting our review, we examined relevant reports and other documents prepared by the three federal agencies within the Departments of Agriculture and the Interior that are responsible for (1) managing federal onshore leasable minerals and (2) allocating their appropriations among the states for net receipts sharing.
We interviewed program managers and budget officials from these organizations in Washington, D.C., and in regional, state, and local offices, as appropriate. We also obtained cost data and estimates from officials in Wyoming, New Mexico, and California. We interviewed the officials responsible for compiling the cost data and discussed the functions of their agencies and how they compare with the federal program. We conducted our review from June through November 1996 in accordance with generally accepted government auditing standards. A full description of our objectives, scope, and methodology is included in appendix III.

As requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, federal agencies, state agencies, and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-9775 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VIII.

Under the Mineral Leasing Act (30 U.S.C. 181 et seq., as amended), states generally receive 50 percent of the revenues from federal onshore mineral leases, which include bonuses, rents, and royalties. Under the act, onshore federal mineral receipts are distributed as follows: 10 percent goes to the general treasury, 40 percent to the reclamation fund, and 50 percent to the state in which the production occurred. Lands leased under other laws have different distribution requirements. With the passage of the Department of the Interior’s 1991 appropriation bill, the federal government began recovering a portion of the costs to administer the federal onshore minerals leasing laws from the revenues generated—a practice now known as “net receipts sharing.” The 1993 Omnibus Budget Reconciliation Act (OBRA) made net receipts sharing permanent.
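The computation that this appendix walks through can be summarized in a short, illustrative sketch (the function name and the aggregated "all other states" entry are ours, not the agencies'; the dollar figures come from the fiscal year 1996 Wyoming example discussed below):

```python
def net_receipts_deductions(agency_allocations, revenue_shares):
    """Illustrative sketch of the OBRA net receipts-sharing computation
    described in this appendix (not the agencies' actual procedure)."""
    # OBRA requires that 50 percent of the preceding year's appropriations
    # be recovered from mineral revenues before they are distributed.
    pool = sum(agency_allocations.values()) / 2

    deductions = {}
    for state, share in revenue_shares.items():
        # Allocate the pool by each state's proportion of total revenues...
        revenue_based = pool * share
        # ...but cap it at one-half of the amount the three agencies
        # attributed to that state.
        cap = agency_allocations[state] / 2
        allocation = min(revenue_based, cap)
        # A state receiving 50 percent of receipts under the Mineral
        # Leasing Act bears half of the allocation; the general treasury
        # and reclamation fund bear the rest.
        deductions[state] = allocation * 0.50
    return deductions

# Fiscal year 1996 Wyoming example from the text: agency allocations
# totaled almost $114 million, about $28 million of it attributed to
# Wyoming, and Wyoming received about 43 percent of revenues.
fy1996 = net_receipts_deductions(
    {"Wyoming": 28e6, "all other states": 114e6 - 28e6},
    {"Wyoming": 0.43, "all other states": 0.57},
)
print(round(fy1996["Wyoming"] / 1e6))  # about $7 million, deducted in fiscal year 1997
```

As the text notes, the cap bound in this sketch mattered only for Wyoming and New Mexico in fiscal year 1996; for every other state the revenue-based allocation was the lower amount.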
The agencies whose appropriations are included in the net receipts-sharing calculations are the Department of the Interior’s Bureau of Land Management (BLM) and Minerals Management Service (MMS) and the Department of Agriculture’s Forest Service. OBRA requires that 50 percent of the preceding fiscal year’s appropriations to administer minerals leasing laws be deducted from the mineral revenues from federal lands before they are distributed among the states, the general treasury, and the reclamation fund. As a result, the states bear the cost associated with about 25 percent of the appropriations. To illustrate, if one year’s appropriation were $100, OBRA requires that 50 percent of that appropriation, or $50, be recovered from the revenues in the following year. If the lands were leased under the Mineral Leasing Act, the $50 would be recovered as follows: $25 comes from the states receiving mineral revenues, $5 from the general treasury, and $20 from the reclamation fund.

Although MMS is responsible for deducting the amounts from each state’s revenues, the deductions also include amounts for the Forest Service and BLM. The Forest Service and BLM compute and report their allocations to MMS, which then calculates the total amount to be deducted from each state’s revenues. The following sections explain how the Forest Service, BLM, and MMS compute their allocations and how MMS combines the allocations of all three agencies to compute the actual deduction from state revenues for the management of the federal onshore minerals leasing program.

For its portion of the net receipts-sharing deduction, the Forest Service calculates and allocates the actual cost of its minerals management program. At the end of each fiscal year, the Forest Service identifies the amounts charged to the minerals management program for each forest and totals these amounts by state to determine each state’s minerals management costs.
The Forest Service’s fiscal year 1996 leasable minerals management costs for Wyoming included those for the Bighorn, Shoshone, Bridger-Teton, and Medicine Bow National Forests. The Forest Service’s leasable minerals costs for New Mexico included those for the Carson, Cibola, Gila, Lincoln, and Santa Fe National Forests. The Forest Service’s costs for California included the Angeles, Eldorado, Inyo, Klamath, Lassen, Los Padres, Mendocino, Modoc, Stanislaus, and Tahoe National Forests.

The Forest Service adds a percentage to these direct costs for indirect expenses. In fiscal years 1995 and 1996, the Forest Service added 20 percent to the leasable minerals costs for program support and common services, including those provided by the regional and headquarters offices. For Wyoming, New Mexico, and California, the Forest Service’s allocation for the fiscal year 1996 net receipts-sharing computation was about $552,000, $234,000, and $517,000, respectively.

For its part of the net receipts-sharing process, BLM allocates its onshore minerals management appropriations to each state. Each BLM state office receives an energy and minerals budget, which includes all funds dedicated to the management of onshore oil, gas, geothermal, and other mineral resources on federally managed lands. From these amounts, BLM subtracts appropriated amounts not specifically related to federal onshore leasable minerals, such as costs to manage Indian minerals and other, nonleasable minerals. To these state office budgets, BLM adds a factor for indirect expenses. In fiscal year 1996, BLM added 19 percent to the energy and minerals appropriations to cover the expense of general administration and information management. For Wyoming, New Mexico, and California, BLM’s allocation for the net receipts-sharing computation was about $19 million, $13 million, and $5 million, respectively.
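The indirect-expense additions just described amount to a flat percentage markup on direct program amounts; a minimal sketch (the helper name and the direct-cost figure are illustrative, chosen so the example roughly matches Wyoming's $552,000 Forest Service allocation from the text):

```python
def allocation_with_indirect(direct_amount, indirect_rate):
    # Both agencies add a flat percentage for indirect expenses before
    # reporting allocations to MMS: 20 percent for the Forest Service
    # and 19 percent for BLM in fiscal year 1996 (rates from the text).
    return direct_amount * (1 + indirect_rate)

# Hypothetical $460,000 of direct Forest Service leasable minerals costs:
print(allocation_with_indirect(460_000, 0.20))  # 552000.0
```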
To determine the share of its budget related to onshore activities, MMS begins with the budget for the Royalty Management Program (RMP), which is responsible for managing revenues from federal mineral leasing, both onshore and offshore. Each RMP division identifies the amount of its budget that is related to managing onshore, offshore, and Indian revenues on the basis of workload factors. Then, RMP allocates the federal onshore amount to the states, again on the basis of workload factors, such as the number of producing leases in the state as a percentage of the total number of federal onshore producing leases. For Wyoming, New Mexico, and California, MMS’s allocation for the net receipts-sharing computation was about $8 million, $10 million, and $3 million, respectively.

After the Forest Service, BLM, and MMS have identified the amounts to be allocated for onshore leasable minerals management, MMS calculates the final deduction for each state as follows:

1. MMS divides the sum of the agencies’ allocations in half, as required by OBRA. The sum of the Forest Service’s, BLM’s, and MMS’s allocations for fiscal year 1996 was almost $114 million. One-half of this amount was $57 million.

2. The resulting amount ($57 million) is allocated among the states on the basis of each state’s proportion of total revenues for that fiscal year. For example, Wyoming received about 43 percent of the federal onshore leasable mineral revenues in fiscal year 1996. To compute the revenue-based allocation, MMS multiplied $57 million by 43 percent, which resulted in an allocation of about $24 million for Wyoming.

3. However, under OBRA, the allocation to each state cannot exceed one-half of the estimated amount that the agencies attributed to that state. For fiscal year 1996, the total amount that the agencies attributed to Wyoming was about $28 million, which is the sum of the Forest Service’s, BLM’s, and MMS’s allocations to the state. One-half of the $28 million is about $14 million.

4. The lower amount is deducted according to each state’s revenue-distribution formula in the following fiscal year. Because Wyoming receives one-half of the federal mineral receipts, it is charged one-half of this lower amount ($14 million). Thus, Wyoming’s total deduction in fiscal year 1997 will be about $7 million.

For all but two states—Wyoming and New Mexico—the allocation based upon each state’s proportion of total revenues resulted in the lower deduction for fiscal year 1996. Table I.1 shows the fiscal year 1996 revenues and net receipts-sharing deductions (which will be deducted in fiscal year 1997) for the states.

Officials in Wyoming, New Mexico, and California described their minerals management programs and provided us with actual and estimated costs of operating these programs.

Wyoming receives revenues from the production of oil, gas, coal, and other minerals in the state. In fiscal year 1996, Wyoming received $30 million from production on state lands and $206 million from federal royalties, rents, bonuses, and other revenues. Almost 4 million acres of state-owned land in Wyoming contain 816 producing mineral leases, compared with 5,632 producing leases on more than 27 million acres of Forest Service- and BLM-managed land.

Wyoming’s State Land and Farm Loan Office’s Mineral Leasing and Royalty Compliance Division issues leases on state lands for mineral development and collects, verifies, and processes royalty payments and payment information. The Division’s activities are guided by the agency’s mission of optimizing economic return from state lands in the interest of the state’s schools and institutions. The Division’s total costs for fiscal year 1996 were about $750,000. The Mineral Leasing Section’s resource-planning activities do not include formal land-use planning activities similar to those required of federal agencies.
Instead, they focus on compatibility of mineral operations with other surface uses. State Land and Farm Loan Office officials estimate that direct costs for resource planning were about $29,000 in fiscal year 1996.

The Mineral Leasing Section issues leases for mineral development on state land. Although it has no formal procedure for environmental analysis, the Mineral Leasing Section may place restrictions on leases if necessary to protect the public, the environment, cultural or archaeological resources, or threatened and endangered species. Another agency, the Oil and Gas Conservation Commission, reviews and approves “applications for permit to drill” and other requests for permission to operate on state lands. However, the Mineral Leasing Section records these permits and monitors the status of operations on state land. The Section maintains information about lease assignments, transfers, and units and communitization agreements. The Section’s estimated use authorization costs in fiscal year 1996 were just over $131,000.

State Land Office staff do not routinely perform compliance inspections, although the Office has budgeted to hire contractors for some site inspections. State Land Office staff may inspect a previously producing operation if it suddenly reports no production, and they work with other state and federal officials to protect state lands from being drained. Costs for inspection-related activities in fiscal year 1996 were an estimated $44,000.

Mineral Leasing and Royalty Compliance Division staff maintain and verify data on leases, payers, and royalties. The staff receive and process royalty information, which includes volume and product value information for each well. They also receive, account for, and process royalty payments. Auditing is limited mainly to desk reviews of reported sales data, which include verification that information contained in royalty reports is supported by other source documents.
These activities cost the State Land Office an estimated $415,000 in fiscal year 1996. The State Land Office may also be involved in appeals to the Wyoming Board of Land Commissioners, coordination of settlements, and assessments of penalties, and it continually works to develop computer systems for royalty management. These, along with administrative and other support activities, make up the balance of the Division’s costs for fiscal year 1996.

Wyoming’s Oil and Gas Conservation Commission is the state’s oil and gas regulatory agency. The Commission’s activities include permitting geophysical exploration; approving operators’ requests to develop minerals on state, federal, and private leases; inspecting those leases for compliance with operating requirements; and collecting and maintaining production data for all wells in the state. The Commission also administers the Environmental Protection Agency’s (EPA’s) Underground Injection Control program. The Commission is funded through a mill levy tax on all oil and gas production in the state; it also receives a grant from EPA. The Commission’s reported costs for fiscal year 1996 were about $1.58 million.

The Commission’s resource-planning activities include both limited land-use planning and permitting of geophysical exploration. Land-use planning focuses on the proximity of proposed oil and gas operations to sensitive areas, such as houses or water wells, and creeks, drainages, rivers, or wetlands. The Commission may require operators to line fluid pits, use a closed system to prevent contamination of these areas, or move the proposed operation. The Commission also works jointly with BLM to approve seismic exploration on state, federal, and private land. Commission officials estimate that these resource-planning activities cost about $175,000 in fiscal year 1996.
The Commission’s use authorization activities include establishing minimum distances between oil and gas wells and reviewing and approving proposals to operate on state, federal, and private land. As part of its enforcement of Wyoming’s oil and gas conservation laws, the Commission establishes well-spacing requirements that apply to all wells in the state. The Commission also receives and reviews applications for permit to drill on all state and private lands in the state and reviews and approves units and communitization agreements. These use authorization activities cost an estimated $480,000 in fiscal year 1996.

The Commission’s five inspection staff inspect oil and gas wells in response to environmental concerns or resource waste. The staff inspect such things as (1) blowout-preventer equipment, (2) general oil field conditions, (3) well-plugging operations, (4) dry holes on state and private lands to ensure that they are properly plugged, and (5) operations for compliance with surface requirements; they also respond to landowners’ complaints. The Commission does not perform production accountability inspections in the same way that BLM does; inspectors do not usually strap tanks, gauge meters, or witness transfers of oil, unless they suspect that theft has occurred. The Commission spent an estimated $436,000 on compliance inspections in fiscal year 1996.

The Commission receives production and well data for all wells in the state and maintains a database of the information that is available to Wyoming’s Department of Revenue and the State Land and Farm Loan Office to assist in their audits of royalties and severance taxes. The Commission spent an estimated $218,000 on collecting, verifying, and maintaining information on production and wells in fiscal year 1996.
The Commission carries out EPA’s Underground Injection Control program in Wyoming and has primary responsibility for Class II (noncommercial) injection and enhanced recovery wells on all but Indian-owned lands. Wyoming has almost 6,500 injection wells, and the Commission inspects about 20 percent of the wells per year to make sure the casing is intact to prevent groundwater from being contaminated. The Commission also witnesses the plugging and abandonment of all wells and attends blowout-preventer tests. Its costs for the Underground Injection Control program were about $320,000 in fiscal year 1996.

Wyoming’s Department of Audit’s Minerals Audit Division audits revenues from mineral development in the state, including royalties, severance tax, and conservation tax. The Division spends about 5 percent of its time and budget on revenues generated on state lands, and its direct costs for auditing leases on state lands in fiscal year 1996 were about $67,000.

New Mexico receives revenues from the production of oil, gas, coal, and other minerals in the state. In fiscal year 1996, the state received a total of $115 million in royalty, rent, and bonus revenues from production on state lands and $124 million in federal royalties, rents, bonuses, and other revenues. About 9.8 million acres of state-owned land in New Mexico contain 5,116 producing mineral leases, compared with 6,160 producing leases on more than 22 million acres of Forest Service- and BLM-managed land.

New Mexico’s State Land Office is responsible for leasing state lands for mineral extraction and for collecting and distributing the royalties generated from the production of minerals. The Office’s Oil, Gas, and Minerals Division identifies parcels to be leased, sets the lease terms, and holds lease sales. The Royalty Management Division collects and audits royalties paid for minerals from state lands.
The State Land Office’s estimated costs in fiscal year 1996 for managing the mineral program were just over $3 million.

The Oil, Gas, and Minerals Division performs resource-planning functions on state trust lands. The Division conducts very limited land-use planning, primarily considering the long-term plans for property that it wants to lease. New Mexico requires neither land-use planning nor environmental planning, although the State Land Office determines whether endangered species are present on state lands identified for leasing. The Division issues permits for seismic exploration. The State Land Office estimates that resource-planning activities cost $149,000 in fiscal year 1996.

Use authorization consists of holding monthly lease sales, reviewing and approving lease assignments and transfers, and reviewing development plans. The State Land Office monitors diligent development by verifying that drilling and production reports show that production is occurring on leases. The Office does not, however, perform physical inspections of sites for the purpose of verifying production quantities. The Office conducts environmental inspections if necessary—if, for example, a leak is reported. It estimates that use authorization and compliance activities cost $366,000 in fiscal year 1996.

The State Land Office’s Oil, Gas, and Minerals Division maintains information on leases and agreements and information on payers. The Royalty Compliance Division processes royalty reports and payments, and collects and disburses revenues. The Royalty Compliance Division also compares information on royalties and production and identifies and resolves discrepancies. Oil and gas producers report and pay royalties to the Royalty Management Division monthly on the basis of the volume and price of oil or natural gas produced. The Division reviews the royalty data and evaluates whether the correct royalty was paid.
The Division also audits royalty reports to verify that the reported value is correct. The State Land Office estimates that costs for these activities were about $847,000 in fiscal year 1996. Other minerals management activities include the adjudication of appeals; coordination of settlements; litigation support; development of procedures and rules; and system development, implementation, and operation. New Mexico’s Oil and Natural Gas Administration and Revenue Database (ONGARD) is a shared database that includes production, tax, transportation, and royalty information for all oil and gas wells in New Mexico. The database includes information on all state leases and the locations of all 45,000 active wells on federal, Indian, state, and private lands. State officials compare production and transportation reports from the system to verify production amounts reported to the state. According to state officials, this comparison is an important control to ensure that the state receives the correct royalty amounts. Development costs for ONGARD totaled $15 million to $20 million as of July 1996. State Land Office officials estimate that the costs for implementing and operating ONGARD in fiscal year 1996 were about $734,000. New Mexico’s Oil Conservation Division of the Department of Energy, Minerals, and Natural Resources is responsible for regulating oil, gas, carbon dioxide, and geothermal wells on state and private land and in some cases on federal and Indian land. The Division establishes spacing for oil and gas wells in the state and reviews and approves operators’ applications for permission to operate on state and private lands, inspects oil and gas operations, processes production information, and administers EPA’s Underground Injection Control program. The Division’s budget for fiscal year 1996 was about $4.2 million. The Division authorizes uses on state and private lands by reviewing and approving applications for permit to drill and other operator proposals. 
The Division approves drilling plans before operations can begin on state leases and may place conditions on its approval of drilling plans on all leases; for example, it requires operators to place nets over all fluid pits to keep birds from landing on the oil-soaked water. The Division also reviews and approves abandonment plans for all wells and other facilities. The Oil Conservation Division estimates its fiscal year 1996 costs for these use authorization activities at about $683,000. The Oil Conservation Division requires drainage protection and inspects oil and gas operations to verify that operators are complying with their approved plans and with environmental requirements. The Division is not required by state law to conduct field inspections to verify mineral production quantities. The Division’s fiscal year 1996 costs for drainage protection and operational and environmental inspections are estimated to be $819,000. The Division collects monthly production disposition and well information for each well in the state and makes it available to the oil and gas industry and other state agencies through the ONGARD database; the State Land Office compares it with royalty reports, and the Taxation and Revenue Department compares it with severance tax reports. The Oil Conservation Division also receives volume reports from oil and gas transporters and compares the production amounts with the amounts reported as transported. The Division investigates and attempts to resolve discrepancies. We were not provided with a separate cost estimate for this function. The Division administers EPA’s Underground Injection Control program, in which it has primacy. The Division inspects wells into which water is being injected to ensure that water does not escape into other geologic formations, which could contaminate groundwater. A grant from EPA covers about 10 percent of the Division’s costs to administer the program. 
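The production-to-transportation comparison that the Oil Conservation Division performs through ONGARD amounts to matching two sets of volume reports, keyed by well, and flagging mismatches for investigation. A minimal sketch of that kind of reconciliation, using hypothetical well identifiers and volumes (illustrative only, not the Division's actual system):

```python
def find_discrepancies(produced, transported, tolerance=0.0):
    """Compare operator-reported production volumes against
    transporter-reported volumes, keyed by well ID. Both inputs are
    dicts of {well_id: volume}; returns the wells whose two reported
    volumes differ by more than the tolerance."""
    discrepancies = {}
    for well_id in set(produced) | set(transported):
        p = produced.get(well_id, 0.0)   # missing report treated as zero
        t = transported.get(well_id, 0.0)
        if abs(p - t) > tolerance:
            discrepancies[well_id] = (p, t)
    return discrepancies

# Hypothetical monthly volumes (barrels) for three wells.
produced = {"W-001": 5000.0, "W-002": 3200.0, "W-003": 1100.0}
transported = {"W-001": 5000.0, "W-002": 3000.0, "W-003": 1100.0}
print(find_discrepancies(produced, transported))  # {'W-002': (3200.0, 3000.0)}
```

As the report notes, this cross-check is an important control: a well whose transported volume falls short of its reported production (or vice versa) is investigated and resolved before royalty and tax amounts are accepted.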
California receives revenues from the production of oil, gas, geothermal resources, and other minerals in the state. In fiscal year 1996, the state received about $3 million from onshore mineral production on state lands and $28 million from onshore federal royalties, rents, bonuses, and other revenues. Onshore, California owns over 1.3 million acres of school lands and minerals; these lands contain 13 producing mineral leases, compared with 358 producing leases on almost 38 million acres of Forest Service- and BLM-managed land. California’s State Lands Commission is responsible for leasing revenue-generating lands and collecting revenues for the state and for protecting, preserving, and restoring the natural values of state lands, both onshore and offshore. The Commission evaluates resources on the land; leases state land for mineral development and permits and reviews plans for mineral development on that land; inspects to ensure compliance with laws, regulations, and lease terms; and collects and audits revenues that the mineral development generates. The Commission’s onshore and offshore minerals management costs for fiscal year 1996 totaled about $6 million. The Commission attributes costs of about $390,000 to onshore minerals management. The State Lands Commission’s resource-planning activities include economic evaluation, mineral and geologic work, and reservoir engineering. According to Commission officials, these activities implement planning and environmental requirements imposed by the California Environmental Quality Act and other state laws. The State Lands Commission estimates that its direct costs for onshore and offshore resource-planning activities were about $534,000 in fiscal year 1996. The Commission leases state land for mineral development, both offshore and onshore. 
Although the Commission is currently issuing leases for navigable stream beds and river land, no offshore leases have been issued since 1968, when the California state legislature instituted a moratorium on offshore leasing because of an offshore oil spill that occurred near Santa Barbara. Despite the leasing moratorium, drilling continues on existing leases under environmental and management control by the Commission. The Commission’s Mineral Resources Management Division reviews and approves drilling and other operation plans on state leases, onshore and offshore. The plans are required to provide for production-monitoring equipment and procedures for the documentation of royalty payments. For offshore development, the Division reviews oil-spill contingency plans. The estimated fiscal year 1996 costs for onshore and offshore use authorization activities were about $824,000. The Commission monitors onshore and offshore operations to ensure diligent development and inspects for compliance with operational and environmental requirements. Because of the environmental sensitivity of operating offshore, the Commission inspects offshore operations at least annually. Inspections involve examining all meters, witnessing every shipment made, and sampling and verifying quality for pricing purposes. The costs for compliance inspections and oil-spill prevention activities both onshore and offshore were estimated to be $925,000 in fiscal year 1996. The Commission maintains information on leases and royalty payers and verifies royalty statements for value, volume, and quality. The Commission receives monthly reports from mineral operators showing production amounts and estimating royalties due. Commission staff compare this information with quality and pricing information and calculate the amount of royalty that should be paid. The Commission also receives and processes royalty payments, bills payers for late payments, and disburses royalties to the state general fund. 
Estimated costs for these activities onshore and offshore in fiscal year 1996 were about $313,000. The Commission’s minerals audits are conducted mainly for the Long Beach operations. The costs for audit activities not related to the net-profit-sharing leases were estimated at $1,000 for fiscal year 1996. These and other activities, including appeals adjudication, litigation support, the development of rules, and system operations and development, cost an estimated $271,000 in fiscal year 1996. The Department of Conservation’s Division of Oil, Gas, and Geothermal Resources regulates oil, gas, and geothermal resources in California. The Division reviews and approves plans to develop minerals on state and private lands; inspects operations to protect public health and safety; collects and maintains production and well information; and has primary responsibility for administering EPA’s Underground Injection Control program. Officials estimate that 4 percent of the Division’s time is devoted to state-owned land, 1 percent to federally managed land, and the remaining 95 percent to private and granted lands. The Division is funded through a uniform assessment on every barrel of oil and every 10,000 cubic feet of gas produced in California. The Division’s onshore and offshore minerals management costs for fiscal year 1996 totaled about $10 million. The Division attributes about $9.5 million to onshore minerals management—regardless of land ownership. Although the Division is not generally required to perform land-use planning, it reviews counties’ decisions on oil, gas, and mineral exploration and development. The Division is the state’s main source for oil, gas, and geothermal reserve estimates and develops 5-year production forecasts and possible development scenarios. 
The Division also provides information on the condition of plugged and abandoned wells in areas where future land development will occur and reviews land-development plans for these areas to ensure that wells are properly plugged and abandoned. These resource-planning functions were estimated to cost $150,000 for both onshore and offshore activities in fiscal year 1996. The Division reviews and approves drilling permits, enhanced recovery and rework proposals, and plugging and abandonment plans for all wells in the state. In approving drilling permits, Division staff review well placement so that wells do not drain resources from adjacent leases; operators are required to notify adjacent leaseholders of operations that may affect their leases. Use authorization activities onshore and offshore cost an estimated $2.3 million in fiscal year 1996. Division staff perform field inspections for compliance with operating requirements and monitor leases to determine whether they are being developed diligently. Inspectors are present at blowout-preventer tests and examine the surface area of a lease to verify that the lease and facilities are in order, operations are fenced and signed, pits and sumps are screened to protect wildlife, and there are no leaks from tanks and pipelines. The Division does not normally perform on-site production verification inspections. Compliance inspections and related activities onshore and offshore were estimated to cost $4.5 million in fiscal year 1996. The Division is the state’s repository for well and operations information and receives production reports for all wells in the state monthly and annually. The Division compares annual production reports with monthly reports to check for inconsistencies in reported production. It provides estimates of reserve volumes to counties for their ad valorem tax estimates. 
The Division also conducts field audits by comparing companies’ run tickets and other source documents with production reports provided to the agency. Production report processing, data resolution, and audit activities were estimated to cost $750,000 in fiscal year 1996. Other activities such as enforcement, appeals adjudication, and legal support, along with systems operations and development costs, are estimated at about $1.1 million in fiscal year 1996. The Division also administers EPA’s Underground Injection Control program. This includes the approval and inspection of all injection wells in the state, including those on federal land. The state receives an annual grant from EPA—about $453,000 in fiscal year 1996—which, according to Division officials, funds about 18 percent of the state’s total cost of the program. In May 1996, we were asked to (1) identify how much Wyoming, New Mexico, and California paid to the federal government for managing minerals on federal lands within their boundaries, (2) identify the costs to the three states for their own minerals management programs, and (3) compare these federal and state program costs. Two of the three states we were asked to include in this study—Wyoming and New Mexico—received the largest state revenue shares from federal mineral onshore leases in fiscal year 1996. The third state we were asked to include—California—provided geographic diversity because it is not in the Rocky Mountain area. California received the fifth largest share of revenues from federal onshore leases in fiscal year 1996. To determine the costs for the three states for federal minerals management, we obtained fiscal year 1996 net receipts-sharing data for the three federal agencies responsible for minerals management activities—the Department of Agriculture’s Forest Service, and the Department of the Interior’s MMS and BLM. 
We interviewed agency officials responsible for allocating the agencies’ budgets for minerals activities to the states. We also interviewed Forest Service and BLM field staff to discuss the minerals management activities they perform. Specifically, we met with Forest Service officials in Regions 2, 3, and 5, and with BLM officials in the Wyoming, New Mexico, and California State Offices. To determine the costs for the three states’ minerals management programs, we requested and received cost estimates for fiscal year 1996 from the states’ land and conservation offices. Specifically, in Wyoming, we obtained cost data from the Wyoming State Land and Farm Loan Office, the Wyoming Oil and Gas Conservation Commission, and the Wyoming Department of Audit’s Minerals Audit Division. In New Mexico, we obtained data from the State Land Office and from the Oil Conservation Division of the Energy, Minerals, and Natural Resources Department. In California, we obtained data from the State Lands Commission and from the Division of Oil, Gas, and Geothermal Resources of the Department of Conservation. To obtain descriptions of functions associated with these costs, we interviewed officials at each of these offices. Because of key differences in the federal and state programs, a comparison of the programs’ costs would not be meaningful. To assess the differences between the federal and state programs, we reviewed legal and statistical information on each, including federal minerals legislation, state conservation and land laws, and federal and state statistics on mineral activities in each of the three states. The following are GAO’s comments on the Wyoming Office of the Governor’s comments enclosed in a letter dated January 10, 1997. 1. Wyoming’s Office of the Governor acknowledged that the federal and state minerals leasing programs are different but disagreed with our position that the costs cannot be meaningfully compared. 
The Governor’s Office commented that a comparison could be made that includes an analysis of the similarities and differences in the programs. However, our analysis shows that because of differences in the programs’ land-use planning, environmental, and production verification requirements, as well as state-specific differences, a cost comparison would not be meaningful. 2. The Governor’s Office requested that we expand our report to provide a breakdown of the federal program’s direct and indirect costs by function. However, our report discusses the federal minerals management program from the perspective of net receipts sharing, which is based upon appropriations and not on the program’s actual costs. Accordingly, we describe how the appropriations are allocated but do not provide actual cost breakdowns. To obtain such actual cost breakdowns would require a review of those costs, which is outside the scope of this report. Furthermore, we believe that regardless of the level of cost detail provided, a comparison between federal costs and state costs would not be meaningful because of the differences in the programs described in the report. 3. Wyoming’s Office of the Governor commented that we do not itemize the basis for over $500,000 deducted from Wyoming’s royalty share for the Forest Service. We adjusted the text of appendix I to clarify that the amount referred to in the Governor’s Office’s comments—$552,000— represents the Forest Service’s allocation to Wyoming for its leasable minerals program, which is included in the net receipts-sharing computation and is not the final deduction. As shown in table 1 of the letter, approximately $140,000, which is about 25 percent of the allocation, will actually be deducted from Wyoming’s federal minerals revenues for the Forest Service’s fiscal year 1996 minerals management activities. 
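The relationship between the Forest Service's $552,000 allocation to Wyoming and the roughly $140,000 actually deducted can be expressed as a one-line computation. A sketch using the figures above (the 25-percent fraction is the report's stated approximation of the net receipts-sharing result, not a fixed statutory rate):

```python
def net_receipts_deduction(allocation, deduction_fraction):
    """Portion of a state's allocated federal minerals management
    costs that is actually deducted from its share of federal
    mineral revenues under net receipts sharing."""
    return allocation * deduction_fraction

# Wyoming, fiscal year 1996: $552,000 Forest Service allocation,
# of which about 25 percent was actually deducted.
print(round(net_receipts_deduction(552_000, 0.25)))  # 138000, i.e., roughly $140,000
```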
As we described in appendix I, the basis for the Forest Service’s allocations to the states is the amount charged to the minerals program for each forest; these amounts are totaled for each state to determine each state’s minerals management costs. The Forest Service adds a percentage to these direct costs for indirect expenses which, in fiscal years 1995 and 1996, was 20 percent. The following are GAO’s comments on the New Mexico Oil Conservation Division’s comments enclosed in a letter dated December 19, 1996. 1. New Mexico’s Oil Conservation Division commented that we did not distinguish between minerals management and surface management and the costs associated with each and further commented that many of the costs allocated to the states are not justifiable. We did not distinguish between the costs for minerals management and surface management because our report does not address actual costs for the federal minerals management program; rather, it discusses how appropriations for federal onshore leasable minerals management are allocated among the states. We did not assess whether these costs were “justifiable” because such an assessment is outside the scope of this review. 2. The Division commented that the state programs include many responsibilities that are not mandated under federal laws, such as statewide spacing rules, oil and gas field rules (and exceptions to these rules), discharge plans, and the witnessing of oil-well casing and plugging operations. We revised our report to include additional information about all three states’ minerals management activities. 3. The Division stated that the report leaves one with the impression that federally managed oil and gas programs are intrinsically more expensive than state programs because federal programs are more comprehensive, involving multiple-use management. 
We did not analyze whether federal programs were “intrinsically more expensive” or less efficient than the states’ programs and did not intend to leave this impression. The following are GAO’s comments on the California State Lands Commission’s comments enclosed in a letter dated December 20, 1996. 1. In written comments and in subsequent discussions, State Lands Commission officials commented that our reporting of the Division of Oil, Gas, and Geothermal Resources’ costs overstated the cost of managing state lands. Commission officials suggested that we clarify that the regulatory agencies’ costs are for managing all lands under their jurisdiction—not just state lands. We adjusted the text of our report to clarify that the regulatory agencies’ scope of authority extends beyond state lands in all three states, stating specifically that about 95 percent of California’s Division of Oil, Gas, and Geothermal Resources’ time is devoted to regulating onshore mineral development on privately owned and other land. 2. In written comments and in subsequent discussions, Commission officials clarified California’s legal requirements for environmental and land-use planning. They commented that the State Lands Commission is responsible for implementing the California Environmental Quality Act and is required to develop environmental information and mitigation requirements and to protect significant environmental values on state lands. We incorporated this information into the text of the report. In written comments, officials stated that the Commission is required to balance public needs in approving the uses of state lands, but in discussing the Commission’s land-use-planning activities, officials agreed that the state land-use-planning processes differ from federal land-use planning. 3. 
Commission officials commented that the State Lands Commission has a program of inspections and other audit procedures to verify production amounts and royalty payments that is more extensive than our description in the draft. In their specific technical clarifications, they stated that operators are required to submit plans that provide for production-monitoring equipment and procedures for documenting royalty payments. We incorporated the Commission’s specific recommended change into our discussion of California’s minerals management program in appendix II. However, according to Division of Oil, Gas, and Geothermal Resources officials, Division inspectors do not perform production verification inspections because California does not have a severance tax. Because the Division of Oil, Gas, and Geothermal Resources performs the majority of the workload for California’s onshore minerals management program, we did not adjust the text of the report to reflect the Commission’s comment. Jennifer L. Duncan, Susan E. Iott, Sue Ellen Naiberk, Victor S. Rezendes 
Pursuant to a congressional request, GAO reviewed whether the costs borne by Wyoming, New Mexico, and California for managing federal minerals were comparable to these states' own programs, focusing on: (1) how much the three states paid to the federal government for managing minerals on federal lands within their boundaries; (2) the costs to the three states for their own minerals management programs; (3) a comparison of these federal and state program costs; and (4) the activities that are associated with the federal and state programs. GAO found that: (1) in fiscal year (FY) 1996, Wyoming, New Mexico, and California received almost $358 million in revenues from federal onshore leasable minerals and they will pay almost $14.6 million in FY 1997 for a portion of the federal government's FY 1996 onshore mineral leasing program; (2) Wyoming's share of the $14.6 million is $7.02 million, New Mexico's is $5.94 million, and California's is $1.65 million; (3) these amounts were computed on the basis of allocations of the federal appropriations for all activities conducted by the Forest Service, the Bureau of Land Management, and the Minerals Management Service related to managing federal onshore leasable minerals; (4) onshore mineral development on Wyoming's, New Mexico's, and California's state-owned land generated combined royalties, rents, and bonuses of $148 million in FY 1996; (5) the states' combined costs for managing onshore mineral development, which includes development on state and private lands, totalled about $19 million; (6) specifically, the costs for Wyoming's minerals management program were $2.4 million in FY 1996, while New Mexico's were $7.2 million and California's costs were $9.9 million; (7) because of differences between federal and state programs, the states' costs for these programs cannot be meaningfully compared; (8) federal decisions about mineral leasing must involve land-use planning and environmental analysis; (9) the three states GAO 
reviewed do not have similar land-use planning processes; (10) furthermore, neither Wyoming nor New Mexico requires an environmental analysis similar to that performed by the federal government; (11) according to California State Lands Commission officials, California laws require an environmental analysis and the protection of state lands; (12) other differences are state-specific and can be attributed to a program's size and regulatory scope and number of mineral operations managed; and (13) for example, California's oil and gas conservation agency devotes about 95 percent of its resources to managing mineral development on privately owned land and other lands not owned by the state or federal government.
In 1943, Public Law 78-16 authorized the vocational rehabilitation program to provide training to veterans with service-connected disabilities. Between 1943 and 1980, program features and criteria underwent several legislative changes. In 1980, the Congress enacted the Veterans’ Rehabilitation and Education Amendments (P.L. 96-466), which changed the program’s purpose to providing eligible veterans with services and assistance necessary to enable them to obtain and maintain suitable employment. The vocational rehabilitation process has five phases. In the first phase, VA receives the veteran’s application, establishes eligibility, and schedules a meeting with the veteran. In phase two, a counselor determines if the veteran has an employment handicap and, if so, the counselor and the veteran jointly develop a rehabilitation plan. The veteran then moves into training or education (phase three), if needed, and on to employment services (phase four) if training or education is not needed or after it is completed. During phase four, VA, state agencies, the Department of Labor, and private employment agencies help the veteran find a job. In phase five, the veteran is classified as rehabilitated once he or she finds a suitable job and holds it for at least 60 days. Veterans are eligible for program services if they have a 20-percent or higher service-connected disability and they have been determined by VA to have an employment handicap. The law defines an employment handicap as an impairment of a veteran’s ability to prepare for, obtain, or retain employment consistent with his or her abilities, aptitudes, and interests. Veterans with a 10-percent service-connected disability also may be eligible if they have a serious employment handicap. The eligibility period generally extends for 12 years, beginning on the date of the veteran’s discharge. Veterans found eligible for services can receive up to 48 months of benefits during the 12-year period. 
While in the program, most veterans receive services and equipment that may be required for beginning employment. For instance, veterans generally receive diagnosis and evaluation, as well as counseling and guidance, and some receive such aids as prostheses, eyeglasses, and educational supplies. They may also receive educational and vocational training; special rehabilitative services, such as tutorial assistance and interpreter services; a subsistence allowance; and employment assistance. Similar to the 1980 amendments, which affect the VA program, the Rehabilitation Act of 1973, as amended, authorized the Department of Education to provide eligible people (usually nonveterans) with services and assistance to enable them to obtain and maintain suitable employment. Education provides federal funds to help people with disabilities become employed, more independent, and better integrated into the community. The federal funds are chiefly passed to state vocational rehabilitation agencies that directly provide services and assistance to eligible people. The federal share of funding for these services is generally about 80 percent; the states pay the balance. In fiscal year 1995, about $2 billion in federal funds went to the state program, and about 1.3 million people received program services. The state vocational rehabilitation process, like the VA program process, comprises five phases, and state vocational rehabilitation clients who obtain and maintain a suitable job for at least 60 days are also classified as rehabilitated. However, in the state vocational program, suitable employment may not always involve wages or salary and may include, for example, working as an unpaid homemaker or family worker. To be eligible for the program, people must have a disability that is a substantial impediment to employment. However, when states are unable to serve all eligible applicants, priority is given to serving individuals with the most severe disabilities. 
The state vocational rehabilitation program offers a wide range of services to help its clients achieve their vocational goals. Examples of specific rehabilitation services include diagnosis and evaluation, counseling and guidance, vocational and educational training, physical restoration, adjustment training, on-the-job training, and employment assistance. If needed, the program also provides supporting services, such as transportation to rehabilitation appointments or to work and income maintenance to cover additional costs incurred while the individual is receiving specific rehabilitation services. The 1980 Veterans’ Rehabilitation and Education Amendments made a significant change in VA’s vocational rehabilitation program by requiring VA to assist veterans in obtaining and maintaining suitable employment. This change expanded the scope of vocational rehabilitation beyond just training and marked a fundamental change in the focus and purpose of the program. However, despite previous GAO recommendations that VA fully implement this amendment and VA’s agreement to emphasize employment services, few veterans in the vocational rehabilitation program obtain jobs. Instead, VA staff continue to focus on providing training services because, among other reasons, they lack adequate training and expertise in job placement. In addition, our analysis of national program data revealed that the percentage of veterans in the program with serious employment handicaps has been steadily declining over the last 5 years. Our discussions with program officials also revealed that VA does not have readily available cost data associated with rehabilitating veterans. We found, on the basis of our review of selected case files, that VA typically spends about $20,000 to rehabilitate each veteran. 
In our 1992 report, we noted that approximately 202,000 veterans were found eligible for vocational rehabilitation program services between October 1983 and February 1991. About 62 percent dropped out of the program before ever receiving a rehabilitation plan, and an additional 9 percent dropped out after receiving a plan. VA rehabilitated 5 percent of the eligible veterans, while the remaining veterans (24 percent) continued to receive program services. From October 1991 to September 1995, 201,000 veterans applied to the vocational rehabilitation program. VA classified approximately 74,000 of these applicants (37 percent) as eligible. Of these veterans, 21 percent dropped out before receiving a plan, and another 20 percent dropped out or temporarily suspended their program after receiving a plan. VA rehabilitated 8 percent of the eligible veterans, and the remaining veterans (51 percent) were still receiving program services at the time of our analysis. VA officials told us that the vocational rehabilitation program has not been effective in placing veterans in suitable jobs. The primary reason for the low percentage of rehabilitations is the lack of focus on employment services, according to VA officials. The director of VA’s vocational rehabilitation program also acknowledged that the program’s rehabilitation rate needs to be improved and has established a program goal of doubling the number of successful rehabilitations over the next 2 years. Our analysis of current program participants showed that almost half of those veterans who were rehabilitated obtained employment in the professional, technical, and managerial occupations—fields such as engineering, accounting, and management. In addition, we found that the average starting salary of these veterans was about $18,000 a year. Moreover, veterans who were rehabilitated spent an average of 30 months in the program, while those who dropped out spent 22 months in the program. 
VA’s vocational rehabilitation program is primarily focused on sending veterans to training rather than on finding them suitable employment, according to VA officials. In 1992, VA issued guidance that emphasized the importance of finding suitable jobs for veterans and suggested that field offices begin employment planning as soon as a veteran’s eligibility for the program services was established. However, regional officials told us that staff do not generally begin exploring employment options until near the end of a veteran’s training. In 1992, we reported that 92 percent of veterans who received a plan between October 1983 and February 1991 went from the evaluation and planning phase directly into training programs, while only 3 percent went into the employment services phase. The remaining 5 percent went into a program designed to help them live independently or were placed in a controlled work environment. These figures remained virtually unchanged for the period we examined. For example, from October 1991 to September 1995, 92 percent of veterans who received a plan went from the evaluation and planning phase into training programs, while 4 percent went directly into the employment services phase. The remaining 4 percent entered an independent living program or were placed in extended evaluation, as shown in figure 1. Moreover, our analysis of national program data on current program participants showed that the vast majority of veterans in training were enrolled in higher education programs. For example, about 91 percent of such veterans were enrolled in a university or college. The remaining 9 percent were enrolled in vocational/technical schools or participated in other types of training programs, such as apprenticeships and on-the-job training. VA regional officials offered several reasons why staff continue to emphasize training over employment services. 
First, VA officials told us that it is difficult for staff to begin exploring employment options early because veterans entering the program expect to be able to attend college. Veterans acquire this expectation, according to VA officials, because the program is often marketed as an education program and not a jobs-oriented program. This image of the program as education oriented was also evident among some VA management. For instance, the director at one regional office we visited described the vocational rehabilitation program as the “best education program in VA.” A second reason for emphasizing training over employment, according to VA officials, is that program staff generally lack adequate training and expertise in job placement activities. At one office, for example, a counseling psychologist told us that program staff are not equipped to find veterans jobs because they lack employer contacts and detailed information on local labor markets. In fact, counseling psychologists at the regional offices we visited described the employment services phase as “the weakest part of the program.” Third, VA officials told us that large caseloads make it difficult for program staff to spend time exploring employment options with veterans. As one counseling psychologist responsible for managing over 300 cases said, “with such a large caseload it’s just easier to place veterans in college for 4 years than it is to find them a job.” According to VA’s Vocational Rehabilitation Service’s Chief of Program Operations, the optimal caseload per staff person is about 125. In recent years, there has been a shift in the type of disabled veteran participating in VA’s vocational rehabilitation program. For example, during the period 1991 to 1995, the percentage of program participants classified by VA as having a serious employment handicap declined from 40 percent to 29 percent, as shown in figure 2. 
During the same period, the percentage of program participants with disabilities rated at 50 percent or higher declined from 26 percent to 17 percent. Meanwhile, the percentage of program participants with disabilities rated at 10 and 20 percent increased from 34 percent to 42 percent. Figure 3 shows the changes in program participants’ characteristics for the period 1991 to 1995. In addition, our analysis of national program data provided demographic information on current program participants. For example, over 90 percent of the veterans who applied for program services were male, and the median age was 44 years. Also, at the time of their application, over 90 percent of the veterans had completed high school; of these, almost 25 percent had also completed 1 or more years of college. VA headquarters and regional agency officials did not know the costs associated with providing rehabilitation services to individual veterans. VA officials told us that, although cost information is located in individual veterans’ case files, it is not compiled or analyzed. Our review of 59 rehabilitated case files at four regional offices showed that VA spent, on average, about $20,000 to rehabilitate each veteran. The exact cost associated with rehabilitating veterans depends on the type and duration of services provided. Our analysis also showed that, generally, over half of the total cost of rehabilitation services consisted of subsistence allowances. Following are specific examples of costs associated with rehabilitating some clients. VA spent about $23,000 to rehabilitate a veteran who had a 10-percent disability for lower back strain. While in the program, the veteran obtained a BS degree in education and eventually obtained a job as an elementary school teacher earning $25,000 a year. VA spent over $20,000 to rehabilitate a veteran who had a 20-percent disability for lower back strain. The veteran, who was attending college under the Montgomery G.I. 
Bill and working part time before entering the program, obtained a bachelor’s degree in sociology and, ultimately, a position as an advocate for the elderly, earning less than $20,000 a year. Our review of 43 program dropout case files—“discontinued” case files—showed that VA spent, on average, about $10,000 each on veterans who did not complete the program. Following are specific examples of costs associated with veterans who did not complete the program. VA spent over $46,000 on tuition and subsistence for a veteran who had a 10-percent disability. The veteran dropped out of college after 4 years because of medical treatment for depression and marginal academic progress. VA spent over $6,000 on a 20-percent-disabled veteran who dropped out of the program after about a year. The veteran stopped attending college classes because of unsatisfactory academic progress. The state vocational rehabilitation program places over one-third of its clients in employment. Our analysis of 1993 national program data, the most current data available, showed that state agencies provide a mix of services to meet their clients’ rehabilitation needs. Our analysis also showed that most clients in the state program had severe disabilities. Furthermore, the state program spends, on average, about $3,000 on each rehabilitated client. From October 1991 through June 1995, about 2.6 million individuals were found eligible for state vocational rehabilitation program services. About 10 percent of these individuals dropped out of the state program before a rehabilitation plan could be initiated, and an additional 22 percent dropped out after a plan was initiated. The state agencies rehabilitated 37 percent of the eligible individuals, and the remaining individuals (31 percent) were still receiving program services at the time of our analysis. 
Clients in the state program are considered successfully rehabilitated even if they achieve outcomes other than employment that provides a wage or salary. For example, in fiscal year 1993, clients who obtained unpaid work or attained homemaker status composed about 9 percent of all rehabilitations. However, the majority of clients rehabilitated under the state program obtained such salaried positions as janitor, baker, office clerk, or cashier. On average, a person rehabilitated under the state program typically earned a starting salary of about $10,000 a year. Moreover, clients who were rehabilitated spent on average 17 months in the program, and clients who dropped out of the program after receiving a plan and at least one rehabilitative service spent 23 months. The state vocational rehabilitation program provides a wide range of services designed to help people with disabilities prepare for and engage in gainful employment to the extent of their capabilities. In fiscal year 1993, the state agencies provided evaluation and counseling services to almost all program participants. Additional services provided included restoration (33 percent of participants); transportation (33 percent); job finding services, such as resume preparation and interview coaching (31 percent); and college/university (12 percent), business/vocational training (12 percent), and on-the-job training (6 percent). Our analysis of 1993 national program data showed that people with severe disabilities make up the majority of clients in the state vocational rehabilitation program. For example, people with severe disabilities composed 73 percent of the state program’s total client population. Our analysis of national data also provided demographic information on the clients who applied to the program. For example, almost 60 percent of the clients who applied for program services were male, and the median age was 34 years. 
In addition, at the time of their application, 43 percent of the clients had not completed high school, while 17 percent had completed 1 or more years of college. Our analysis of national program data showed that in fiscal year 1993, the state vocational rehabilitation agencies spent, on average, about $3,000 on each client who was rehabilitated. State agency staff spent funds providing or arranging services on behalf of clients, including assessment, training, medical services, transportation, and personal assistance. These costs exclude costs incurred for program administration and for salaries of counselors and other staff, and the state vocational program does not provide clients money for basic living expenses. Following are examples of costs associated with rehabilitating clients, which we obtained from our review of case files of 41 rehabilitated clients at four regional offices. In one case, the state program spent about $4,000 to rehabilitate an illiterate client suffering from mild retardation. The client was severely disabled and had not completed high school. The client was provided adjustment training and obtained a job working 3 hours a week as a stock person at a hardware store making $4.50 an hour. In a second case, the state program spent about $6,000 to rehabilitate a client with a learning disability and chronic back pain. The client was severely disabled but had graduated from high school. The client was provided clerical training and obtained a job working full time as a food service attendant making $4.50 an hour. The national data also showed that the state program spent, on average, about $2,000 on each client who did not complete the program after receiving a plan. Following are examples of costs associated with clients who did not complete the program, which we obtained from our review of 40 discontinued case files. In one case, the state program spent about $4,500 on a client who dropped out because she became pregnant. 
The client was deaf and classified as severely disabled. She had problems communicating and had not completed high school. The client’s rehabilitation goal involved pursuing an associate’s degree and obtaining a job as an office clerk. In a second case, the state program spent about $3,500 on a client who dropped out because he lacked the motivation to continue in the program. The client, who suffered from epilepsy and moderate retardation and was classified as severely disabled, was provided work adjustment training. In response to prior GAO and VA reports that recommended that VA emphasize finding jobs for veterans, VA has begun to reengineer its vocational rehabilitation program. The overall objective of VA’s reengineering effort is to increase the number of veterans who obtain suitable employment through improvements in program management. Under new program leadership, VA’s Vocational Rehabilitation and Counseling Service established a design team in 1995 to restructure the program by focusing on finding veterans suitable employment, making use of automation, and identifying factors that detract from program efficiency. VA consulted with state and private-sector vocational rehabilitation officials, veterans’ service organizations, the Department of Labor, and private contractors to help it identify needed program improvements. VA’s design team has identified several key initiatives aimed at improving program effectiveness. For example, VA plans to emphasize employment by exploring job options with veterans before sending them to training. VA also plans to develop marketing strategies that emphasize employment services. This initiative may involve revising existing pamphlets and brochures and developing informational videos. Further, VA plans to assess and develop program staff skills to ensure that staff have the necessary expertise to provide employment services. 
VA is also piloting an automated data management system designed to capture key information on program participants, such as the cost of providing rehabilitation services. VA officials told us that this information would be helpful in targeting ways to make the program more cost effective. VA also plans to conduct nationwide telephone surveys to determine why veterans drop out of the program. Officials told us that knowing this information will help them better identify problems veterans encounter with program services and develop plans that enhance veterans’ chances of successfully completing the program. VA is in the early stages of its reengineering effort and has not implemented any of the design team’s initiatives. The Chairman of VA’s design team told us that VA plans to begin implementing these initiatives nationwide by the end of fiscal year 1997. Despite a legislative mandate enacted 16 years ago requiring VA to help program participants obtain suitable jobs and prior GAO reports documenting VA’s limited success, VA’s vocational rehabilitation program continues to rehabilitate few disabled veterans. Currently, new program leadership recognizes the need to refocus the program toward the goal of employment and has taken steps to improve the program’s effectiveness. However, the concerns addressed in this letter are long standing, and VA’s reengineering efforts have not been completed. The success of VA’s efforts will depend on which initiatives VA adopts and how they are implemented. We received comments from the Department of Education and VA on a draft of this report. Education agreed with our findings and offered some technical suggestions, which we incorporated where appropriate. VA said it generally agreed with our findings and that its current reengineering initiative will successfully address all of the concerns we raised. However, VA cited a number of concerns with the information contained in the draft. 
For example, VA took issue with our finding that 8 percent of eligible veterans are rehabilitated. Instead, VA claims that 32 percent are rehabilitated and that this rate compares favorably with the 37-percent rehabilitation rate found in the state program. We disagree. VA based its rehabilitation percentage on the number of veterans who left the program (about 19,000)—a combination of veterans who dropped out or interrupted their program of services, as well as those who were rehabilitated—as opposed to the total number of eligible veterans (about 74,000). VA’s approach inflates the VA rehabilitation rate. Using VA’s approach, the state program would have an even higher rehabilitation rate—more than 60 percent. The fact remains, however, that of the 74,000 veterans found eligible for program services, 6,000 successfully completed the program. VA also took exception to our discussion of its lack of focus on employment services. VA contends it has consistently focused on the necessity of providing meaningful employment services, a goal that is outlined in policy directives and reinforced with comprehensive staff training. Our report acknowledges that VA issued guidance in 1992 that emphasized employment services. However, VA staff that administered and implemented the program in the four locations we visited told us that they do not emphasize employment until near the end of a veteran’s training. Furthermore, the Chairman of VA’s design team, an individual charged with evaluating and restructuring the program, told us that the primary reason for the program’s low rehabilitation rate is VA’s lack of focus on employment services. Regarding VA’s claim that it provides comprehensive staff training, the Program Operations Chief told us that other than a week-long seminar on employment services presented about 2 to 3 years ago, VA headquarters has not sponsored staff training in employment assistance. 
Further, as already reported, staff in the regional offices that we visited told us they are not adequately trained in job placement activities. VA also took issue with our discussion of its lack of knowledge of the costs associated with providing rehabilitation services to individual veterans. VA claims that it has this information and can retrieve it at any time, although doing so is a laborious process. However, we saw no evidence that VA officials knew the costs associated with providing rehabilitation services. Neither the Chief of Program Operations nor officials located in the four regional offices that we visited could provide us with the costs associated with rehabilitating a veteran. Instead, we were always directed to the case files and, in some regional offices, to the finance section to obtain this information. VA also expressed concern that our random sample of program participant cases was not representative of the veterans that VA serves. VA asserted that “a more appropriate sample could readily come up with examples of veterans with more profound disabilities who are earning handsome salaries as a result of their participation in VA’s vocational rehabilitation program.” As we have pointed out, the results of our sample of 102 individual veteran case files are neither representative nor generalizable to all program participants. Our purpose in sampling program participants was to furnish examples of costs associated with providing rehabilitation services, not to demonstrate the severity of disabilities represented in the program or the average salaries of program participants. We addressed the issues of disability severity and salary using VA’s national database and discussed them in other sections of the report. VA’s comments in their entirety appear as appendix II. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies to the Secretary of Veterans Affairs and other interested parties. This work was performed under the direction of Irene Chu, Assistant Director, Veterans’ Affairs and Military Health Care Issues. If you or your staff have any questions, please contact Ms. Chu or me on (202) 512-7101. Other major contributors to this report are listed in appendix III. We designed our study to collect national information on the characteristics of VA and state vocational rehabilitation clients, the services they received, and the outcomes they achieved. We also obtained information on the costs associated with providing rehabilitation services to clients in each program. In doing our work, we examined VA and Department of Education databases. We also interviewed VA and Education officials at the national and regional levels during site visits at VA and state vocational rehabilitation facilities in four judgmentally selected locations. We examined VA and Education vocational rehabilitation databases to obtain national information on the percentage of clients rehabilitated, client characteristics, and services provided. However, we did not verify the information included in the databases. To determine the percentage of veterans rehabilitated, we analyzed VA’s Chapter 31 Target System database for the period October 1991 through September 1995. We also compiled information on client characteristics of and services provided to veterans currently participating in the program. We define current participants as veterans who were not rehabilitated or discontinued prior to the beginning of fiscal year 1995 and were in one of the program’s five phases on February 7, 1996. For the state vocational rehabilitation program, we analyzed data from two Education databases. To address the percentage of the clients rehabilitated, we reviewed Education’s Quarterly Cumulative Caseload Reports for October 1991 through June 1995. 
This report provides aggregate data on the cases handled by state rehabilitation agencies. To obtain information on demographic characteristics and services provided, we analyzed Education’s Case Service Reports. The Case Service Reports contain information collected from the state agencies at the end of each fiscal year on the characteristics of each client whose case was closed that year, as well as on the general types of services that each client received and his or her employment status in the week of case closure. At any particular time, Education may be waiting for original or corrected data from one or more states for 1 or more years. At the time we began our study, the most recent full year for which largely complete data were available was fiscal year 1993. We conducted site visits at VA regional offices and state vocational rehabilitation agencies at four locations from January 1996 through March 1996. We visited VA and state vocational rehabilitation facilities in Milwaukee, Wisconsin; New Orleans, Louisiana; Roanoke, Virginia; and Portland, Oregon. We selected the sites judgmentally to include VA and state agencies that (1) were located in different regions, (2) were varied in staff size and workload, and (3) had ongoing initiatives to improve their vocational rehabilitation program. During these site visits, we interviewed vocational rehabilitation officials on various aspects of the program operations, reviewed selected case files, and discussed the specific cases with program specialists. At each VA regional office and state agency visited, we randomly selected and reviewed 9 to 12 case files of program participants who had been rehabilitated or had dropped out of the program between January 1 and June 30, 1995. Because the total number of rehabilitated cases available at VA’s field office in Portland, Oregon, was relatively small, we reviewed all 30 cases. 
We reviewed a total of 183 vocational rehabilitation cases: 102 at VA’s regional offices and 81 at the state agencies. These cases did not compose a representative sample of each site’s rehabilitation or dropout cases; thus, our results cannot be generalized. From case file reviews and discussions with program specialists, we obtained detailed information on client characteristics; services provided; and, when applicable, the type of employment obtained and starting salary. Also from the case files, we determined the costs associated with providing rehabilitation services to program participants, such as how much was spent for basic education and vocational training, readjustment training, physical restoration, and other support services.

Irene Chu, Assistant Director, (202) 512-7101
Jaqueline Hill Arroyo, Evaluator-in-Charge, (202) 512-6753
Julian Klazkin, Senior Attorney
Steve Morris, Evaluator
Michael O’Dell, Senior Social Science Analyst
Jeffrey Pounds, Evaluator
Pamela Scott, Communications Analyst
Joan Vogel, Senior Evaluator (Computer Science)

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) vocational rehabilitation program, focusing on: (1) the percentage of rehabilitated veterans; (2) the services provided; (3) the characteristics of clients served; (4) the cost of rehabilitation; and (5) VA efforts to improve program effectiveness. GAO found that: (1) despite the 1980 legislation requiring VA to focus its rehabilitation program on finding disabled veterans suitable employment and subsequent GAO reports recommending that VA implement this legislation, VA continues to place few veterans in jobs; (2) VA officials told GAO that the percentage of veterans classified as rehabilitated is low because the program does not focus on providing employment services; (3) instead, VA continues primarily to send veterans to training, particularly to higher education programs; (4) GAO's analysis of national program data showed that the characteristics of program participants are changing and that VA does not have readily available cost data associated with providing rehabilitation services to individual veterans; (5) GAO's review of over 100 case files, however, showed that VA spent, on average, about $20,000 on each veteran who gained employment and about $10,000 on each veteran who dropped out of the program; (6) generally, over half of the total costs of rehabilitation services consisted of payments to assist veterans in covering their basic living expenses; (7) with regard to Education's state vocational rehabilitation program, GAO's analysis of national program data showed that over the last 5 years (1991-1995) state agencies rehabilitated 37 percent of the approximately 2.6 million individuals eligible for vocational rehabilitation program services, while about 31 percent continued to receive program services; (8) the state agencies provide a wide range of rehabilitative services, from physical restoration and transportation to college education and on-the-job training; 
(9) in addition, a majority of the program participants had severe disabilities; (10) moreover, national program data showed that state vocational rehabilitation agencies spent, on average, about $3,000 on each client who achieved employment and about $2,000 on each client who dropped out of the program; (11) the state program does not provide funds to cover client living expenses; (12) in response to prior GAO and VA findings and recommendations, VA recently established a design team to identify ways of improving program effectiveness; (13) the team's overall objective is to increase the number of veterans who obtain suitable employment through improvements in program management; (14) the team is also looking at ways to improve staff skills in job finding and placement activities; and (15) VA hopes to begin implementing program changes in fiscal year 1997.
Youth aiming to develop advanced skills, compete at a high performance level, and achieve competitive excellence in a sport have a variety of options for honing their skills. Youth can participate through private athletic clubs—local sport-specific organizations that serve athletes who compete, or may be interested in competing, at the highest performance level. Generally, these clubs are part of a larger sport network under the umbrella of the USOC. The Amateur Sports Act of 1978 established the USOC as a federally chartered nonprofit corporation that serves as the centralized body for U.S. Olympic sports. In 1998, the Amateur Sports Act was revised by the Ted Stevens Olympic and Amateur Sports Act. Under the act, the USOC is authorized to recognize NGBs, which govern their respective sports and recommend athletes to the USOC for participation in the Olympic Games. Currently, the USOC recognizes 47 NGBs. The act sets forth a number of purposes for the USOC, including to exercise jurisdiction over U.S. participation in the Olympic Games and the organization of the Olympic Games when held in the United States. Other purposes include to provide swift resolution of conflicts and disputes involving athletes and NGBs; to coordinate and provide information on training, coaching, and performance; and to encourage and support research, development, and dissemination of information in the areas of sports medicine and sports safety. The USOC may provide financial support and other resources to the NGBs, as needed and as the USOC considers appropriate. All members of the USOC are organizations, such as NGBs; the USOC has no individual members. Members of the NGBs may include youth and adult athletes, coaches, and other staff, although some NGBs may only have organizational members. The USOC and NGBs can impose various requirements as a condition of membership, but do not govern employment practices of clubs. 
Clubs may also be members of regional affiliates or associations of their sport, which generally serve as intermediaries between the NGB and the local clubs, and as the governing body for the local club within their region. (See fig. 1.) The USOC established the SafeSport program, an athlete safety program that addresses misconduct in sports through information, training, and resources. The program was born out of a working group convened by the USOC in 2010 to develop a set of recommendations for promoting safe training environments in sports. To deliver its recommendations, the working group sought input from a range of stakeholders. The subsequent creation of the SafeSport program included one-on-one discussions and a review of relevant research and best practice documents, including a resource document on child sexual abuse prevention from the Centers for Disease Control and Prevention (CDC) within HHS. The USOC’s minimum standards policy for the program required each NGB to adopt an athlete safety program by December 31, 2013, that included the following minimum components:

- a policy that prohibits and defines six forms of misconduct: bullying, hazing, harassment (including sexual harassment), emotional misconduct, physical misconduct, and sexual misconduct (including child sexual abuse);
- criminal background checks for individuals who are in a position of authority over or have frequent contact with athletes;
- an education and training program covering key components of their athlete safety program by January 1, 2014;
- a procedure for reporting misconduct; and
- a grievance process to address allegations of misconduct following a report or complaint.

Youth seeking to compete at a high performance level and achieve competitive excellence in a sport can also participate in sports camps held on college and university campuses. 
For example, some colleges and universities offer sport-specific skill building through youth sports camps and instructional clinics held on their campuses. While the offerings vary by campus, such camps and clinics are available for a variety of sports, may be offered as day or overnight camps, and may range in duration from a few days to several weeks. In addition, the degree to which colleges and universities operate and oversee the camps can vary. For example, youth sports camps may be operated by the college or university’s athletic department, by a private entity that contracts with the college or university to use its facilities, or by a combination of the two. While child sexual abuse—the act of forcing a child to engage in sexual activity with a perpetrator—is criminal in nature, according to research, perpetrators of such abuse typically exhibit other inappropriate, and sometimes noncriminal, behaviors. These behaviors may be displayed on a continuum and may include grooming, sexual misconduct, and child sexual abuse. (See fig. 2.) Federal agencies engage in various efforts to prevent and respond to the sexual abuse of a broad population of youth, and these efforts may apply to youth athletes, depending on the circumstances. Some of these federal efforts may help prevent or respond to the sexual abuse of youth athletes in both private athletic clubs and university sports camps. For example, HHS and Justice provide resources in the areas of sexual violence prevention, reporting, and response practices, and the Federal Bureau of Investigation (FBI) has a role in investigating incidents of child sexual abuse that may constitute federal crimes. Other federal activities may influence postsecondary schools’ efforts to prevent and respond to the sexual abuse of youth athletes at sports camps held on their campuses. 
Specifically, Education and Justice oversee school compliance with Title IX, which prohibits sex discrimination, including sexual harassment and abuse, in any education program or activity that receives federal funds. In addition, Education oversees compliance with the Clery Act, which requires schools that participate in federal student aid programs to annually disclose statistics on certain crimes, including sex offenses, that occur on or near their campuses. (See table 1.) CDC and the National Center for Missing and Exploited Children (NCMEC), a nonprofit organization that receives Justice funding, each developed suggested practices for preventing and responding to sexual abuse within youth-serving organizations, which may include youth athletes in private athletic clubs and college and university sports camps. Both CDC’s and NCMEC’s resources emphasize similar categories of suggested practices:

- conducting an organizational self-assessment;
- screening staff for risk factors;
- defining behavioral guidelines and creating safe environments;
- training staff on sexual abuse and misconduct;
- monitoring behavior; and
- developing reporting and response strategies when complaints or allegations are made (see table 2).

The NCMEC resource, in particular, includes information on how to address the unique interactions that occur between coaches and youth athletes. For example, NCMEC’s resource provides references to sexual abuse prevention and response programs, online training available through selected athletic organizations, and an article on developing appropriate relationships between coaches and athletes. NCMEC’s resource also offers youth-serving organizations some considerations regarding background checks to screen applicants. NCMEC suggests organizations use name-based and fingerprint-based criminal history checks in addition to other screening tools, such as interviews and reference checks. 
According to NCMEC, name-based checks typically offer greater accessibility and more timely results. However, based on a federal pilot program through which NCMEC assisted certain youth-serving organizations in conducting nationwide fingerprint-based criminal history checks, NCMEC officials concluded fingerprint-based checks were the most reliable way to identify those with disqualifying criminal histories in other states, under a different name, or under a different date of birth. Further, they concluded that fingerprint checks provide the greatest potential in confirming an individual’s identity. HHS and Justice also provide funding for sexual violence awareness and prevention programs. Although these efforts do not focus on athletic programs, funding has been used in some instances to address sexual abuse in athletics. For example, an official from the Pennsylvania Coalition against Rape told us that the organization, in part using funding from the CDC’s Rape Prevention and Education Program, has partnered with Pennsylvania State University (Penn State) to help the university strengthen its sexual abuse prevention and response activities in response to high-profile incidents of sexual abuse involving youth athletes on campus. Justice’s Office on Violence Against Women also provides funding to colleges and universities to help prevent sexual violence through its campus grant program, and officials from this office indicated that some college grantees have included athletic departments in their efforts to increase awareness of sexual violence. In addition to the suggested practices and prevention resources offered by HHS and Justice, the FBI has a role in investigating crimes against children that fall under federal jurisdiction, which may involve youth athletes who participate in private athletic clubs or college and university sports camps. 
For example, if a youth athlete is transported across state lines and sexually abused by a coach or other athletic personnel, the FBI may investigate the incident for possible violations of federal law. As researchers and experts on athlete abuse have noted, travel can be an area of significant risk for sexual abuse and misconduct, particularly if coaches travel alone with or share hotel rooms with athletes. FBI officials told us they are alerted to these crimes through various means, including direct reports from victims and families; witnesses; state and local law enforcement agencies; university and youth group officials; mandatory reporters, such as medical professionals and legal practitioners; and other federal law enforcement partner agencies. The FBI relies on a network of 71 Child Exploitation Task Forces that partner with 400 state and local law enforcement entities to help bridge federal, state, and local resources to address the challenges of child exploitation investigations, which may include those involving youth athletes. Still, according to the FBI, child sexual abuse offenses, such as those involving youth athletes, are inherently challenging, as children or their parents may be reluctant to report them, especially when their abuser is in a position of trust, as is often the case with sports coaches. Education oversees school compliance with Title IX and Clery Act requirements, which may apply to incidents of sexual abuse of youth athletes on college and university campuses. In general, Title IX and Education’s regulations implementing Title IX require schools to take steps to respond to sexual violence, including abuse, while the Clery Act requires schools to annually report statistics on sex offenses that occur on or near their campus to Education and in a security report for students and employees. 
However, these requirements would generally not apply in cases of sexual abuse of youth participating in sports at private athletic clubs unrelated to a postsecondary school. According to Education officials, determining a school’s obligations under either law would depend on the circumstances of each incident, which may be affected by the structure of sports camp operations, as well as where the abuse is alleged to have occurred.

Title IX: Education officials said that Title IX would generally apply to cases of sexual abuse committed by school employees, and a school would be obligated to take steps to prevent and respond to such abuse, if the school knew or reasonably should have known about the abuse. By contrast, if alleged sexual abuse occurs in a program held on campus, but operated by an entity independent of the school, Education first determines whether Title IX applies and, if so, whether the school met its obligations under the law. One factor Education considers when making these determinations is whether the camp receives significant assistance from the school, such as use of a school’s facilities. As with cases of abuse committed by school employees, in cases of abuse committed by third parties, such as coaches at sports camps who are not employed by the university, Education also considers whether the school knew of or should reasonably have known of the alleged abuse to determine whether the school is obligated under Title IX to address the abuse.

Clery Act: With respect to reporting campus crimes under the Clery Act, Education officials stated that a school’s reporting obligations would be triggered if sexual abuse occurs on or near a campus and is reported to a campus official responsible for Clery Act reporting. They explained that this would be true regardless of whether the school is involved with the daily operations of a sports camp. 
Education conducts reviews of and investigations into schools’ compliance with Title IX and Clery Act requirements to ensure schools are meeting their obligations under these laws. Justice may also review and investigate allegations of Title IX violations. Because Title IX protections cover a broad population and Clery Act requirements apply to a range of incidents, Education officials stated that they do not target their activities toward youth athletes specifically. Officials from Education and Justice told us they initiate Title IX investigations, and Education officials told us they initiate Clery Act investigations, both in response to complaints received alleging suspected violations and on their own initiative. Although Education officials told us of investigations into possible Title IX and Clery Act violations at Penn State, as of February 2015, both investigations were ongoing and determinations had not yet been reached, according to officials. Education and Justice officials said they were not aware of any other cases or complaints in recent years specifically alleging sexual abuse of youth athletes participating in sports camps on school campuses. However, given the purposes for which their data collection systems were established, neither Education’s nor Justice’s systems allow officials to conduct automated searches for cases involving the sexual abuse of youth athletes by coaches or other athletic personnel.

Title IX: Education and Justice officials said their complaint intake systems do not track information about the relationship between the victim and the perpetrator or the age of the victim, given the broad focus of their enforcement activities.

Clery Act: Education officials explained that crime statistics required by the Clery Act contain high-level information about incidents, which do not include information about the relationship between victims and perpetrators in sex offense cases. 
Federal agencies may take certain actions if a school is found to be out of compliance with either Title IX or the Clery Act. If Education’s OCR finds that a school has violated Title IX, it first seeks to establish voluntary compliance through a resolution agreement, which describes changes the school agrees to make to ensure its procedures for preventing and responding to sexual abuse comply with Title IX. If Education is unable to achieve voluntary compliance in a Title IX case, it may initiate proceedings to terminate the school’s federal funding, or refer the case to Justice for possible litigation. Additionally, Education’s FSA can impose fines on colleges for Clery Act violations. Education has published several guidance documents to assist schools in complying with Title IX and the Clery Act, which include information that may apply to the sexual abuse of youth participating in university sports camps.

Guidance on Protected Individuals and Covered Settings

Title IX: OCR guidance emphasizes that Title IX protects students from sexual harassment and sexual abuse carried out by a school employee. OCR guidance further specifies that any sexual activity between an adult employee and a student below the legal age of consent in his or her state is viewed as unwelcome and nonconsensual, and therefore sexual harassment under Title IX. Although the guidance focuses on school employees and students, according to a senior OCR official, the same principles would apply in cases of sexual activity between adult coaches and youth athletes participating in sports camps held on campus.

Clery Act: OPE guidance specifies that all crimes covered by the Clery Act should be counted in schools’ crime statistics for their annual security reports and reports to Education, even if they involve individuals not associated with the school. 
According to officials from OPE and FSA, the sexual abuse of youth athletes participating in sports camps held on campus is covered by the Clery Act.

Guidance on Training and Education

Title IX: OCR’s Title IX guidance states that schools should provide training about how to identify and prevent sexual abuse to all employees likely to witness or receive reports of sexual abuse, including athletic coaches. OCR also explained in guidance that schools are responsible for developing policies that prohibit inappropriate conduct by school personnel and procedures for identifying and responding to such conduct. Such policies, OCR guidance states, could include a code of conduct that addresses grooming—behavior intended to establish trust with a minor to facilitate future sexual activity. In our prior work on sexual abuse by K-12 school personnel, experts cited behavioral codes of conduct and awareness and prevention training on sexual abuse as key tools for preventing abuse. Furthermore, experts said identifying and addressing violations of conduct, including those that fall short of abuse, as they occur could help prevent future abuse.

Clery Act: In October 2014, Education issued final regulations implementing recent amendments to the Clery Act; among other things, the regulations define requirements for schools to offer sexual violence prevention and awareness programs to employees, including athletic personnel. OPE and FSA officials stated that they instructed schools to provide training and education on sexual violence, which may include populations involved in youth sports programs on campus. Officials from OPE and FSA also confirmed that schools should offer training to temporary hires for youth sports camps. According to OPE officials, while schools are strongly encouraged to mandate training, such a requirement was not included in the final Clery Act regulations because it was not required by the statute. 
Officials said that concerns were raised during the negotiated rulemaking process that mandating training would be burdensome for schools with large numbers of students and employees.

Guidance on Reporting and Response

Title IX: In its Title IX guidance, OCR recommends that schools working with minors incorporate relevant state and local mandatory requirements for reporting child abuse and neglect into their policies, as schools may have reporting obligations to local child protective services and law enforcement agencies. OCR guidance also states that individuals designated as responsible employees are obligated to report alleged incidents of sexual abuse to school officials. To assist schools in appropriately responding to reported cases of sexual abuse, which may include youth athletes, OCR guidance states that schools should consider potential conflicts of interest when investigating reports of alleged sexual abuse. OCR officials confirmed that employees from a college’s athletic department should not be responsible for conducting investigations of suspected sexual abuse of youth athletes.

Clery Act: OPE guidance outlines schools’ Clery Act obligations, which include reporting the number of certain crimes, including sex offenses, occurring on or near their campuses in an annual security report, submitting those statistics to Education, and maintaining a daily crime log to record information about all reported campus crimes. FSA officials also reported providing training that instructed schools to inform those employees with significant responsibility for campus activities, such as athletic directors and coaches, of their duty to report any crimes on campus under the Clery Act. FSA officials said they have also responded to inquiries from schools about how to set up procedures to ensure reports are made in light of a recent high-profile case at Penn State where questions were raised about reporting suspected sexual abuse of youth athletes. 
In response to these inquiries, FSA officials told us they explained to colleges that designating a Clery compliance officer—an individual responsible for coordinating a college’s Clery Act activities—can help colleges ensure they have an individual on campus who is aware of and can enforce Clery Act requirements across different campus departments. Each of the 11 athletic programs we visited reported conducting some type of screening to determine if applicants are suitable to work with children. For the eight private athletic clubs we visited, determining minimum standards for who receives background checks and what type of check is used is the responsibility of the NGBs of their sport or their regional affiliates. At sports camps operated by the three universities we visited, the school determines who gets checked and in what ways. Among the selected athletic programs we visited, screenings of staff were most frequently conducted annually or biannually. The most commonly used screening method of all the athletic programs we visited was the name-based criminal background check, which involves comparing the names, dates of birth, and Social Security numbers of individuals to information collected by private vendors from state and local court and criminal records. These background checks involve scanning national or federal criminal databases and sex offender registries. While relying on name-based checks, sports camps at two universities we visited also used fingerprint checks in certain instances, such as when employees work at camps for multiple years, and for volunteers. Officials acknowledged the benefits and challenges of using name-based and fingerprint-based checks. 
Although NCMEC officials cited some advantages to using name-based criminal history checks from private screening vendors, such as availability and timeliness, athletic program officials we spoke with raised concerns about the completeness and accuracy of those checks, and some told us of their preference for fingerprint-based checks. For example, NGB officials told us that due to the number of background check vendors in the marketplace and the various databases they use, it can be difficult for consumers to know the quality of the vendors and of the information in their databases. Information compiled by vendors is typically drawn from a variety of state and county law enforcement databases, which may not be frequently updated to ensure criminal histories are complete and accurate, according to an official from one child sexual abuse prevention organization. Officials from two of the three NGBs we talked with told us they would prefer to use fingerprint-based background checks, with one citing NCMEC’s conclusions from the pilot program that fingerprint-based checks provide greater accuracy in identifying an individual than name-based checks. Officials at both private athletic clubs and university sports camps told us, however, that the cost of fingerprint-based background checks was a concern. GAO previously conducted work on FBI criminal history checks for non-criminal justice purposes and found that state law enforcement authorities often charge fees for fingerprint-based checks. One official from an organization that uses sports programs to engage at-risk youth explained that these fees can range from $25 to $100 per check and can be cost prohibitive for organizations that rely on large numbers of athletic personnel. 
In addition to background checks, four of the youth athletic programs whose officials we met with reported having screening policies that called for applicant interviews; however, reference checks of applicants were generally not required among the 11 athletic programs, according to program officials. According to officials at both private athletic clubs and university sports camps, because sports communities are often small and well acquainted, applicants are typically referred by coaches and other athletic personnel, and formal screening practices such as targeted interview questions and reference checks are not commonly used. Officials at one of the private athletic clubs that reported conducting applicant interviews told us there is no need for specific interview questions to weed out perpetrators of sexual abuse because they could detect such offenders based on appearance or demeanor. However, as noted by one national child advocacy center, many perpetrators of child sexual abuse are well educated and respected members of the community and look like anyone else. Further, research has pointed out that the belief that offenders fit certain stereotypes can hinder child sexual abuse prevention. One NGB official explained that in local programs, there is a feeling that “everybody knows everybody,” and that it is unnecessary to ask for references. Though the NGB requires private athletic clubs to check references, this official expressed doubt that clubs are following through. The official told us the NGB is developing an enhanced tool for clubs that will offer sample questions for interviews and reference checks to address findings from a study that evaluated their SafeSport program against the CDC resource and identified weaknesses in screening policies, including the lack of personal interviews and reference checks in which youth protection is discussed. 
The policies of selected athletic programs whose officials we met with set basic standards of behavior between coaches and youth athletes. Private athletic clubs we visited generally had athlete safety policies that were based on guidance provided by their respective NGBs. Officials from each of the eight private clubs explained that in response to the recent creation of SafeSport policies by the USOC and their NGB, clubs either defer to their NGB’s policies or look to their NGB or regional affiliates for implementation guidance. For example, NGB SafeSport guidelines detail a variety of behavioral boundaries, prohibitions, and expectations for coach-athlete relationships, covering topics such as physical contact and social media use, among others. The SafeSport policies of all three NGBs whose officials we met with also provide guidelines local clubs may want to consider to ensure their SafeSport policies reflect and account for the particular setting of the club and facility. For example:

- One NGB’s social media guidance prohibits coaches from connecting to any athletes through a personal social media page or application, and suggests that any contact via social media should only take place through an official team page that parents are able to join.
- An official at one local hockey club we met with told us their policy is to lock all locker room doors when youth players are on the ice. In addition, this official explained that youth and adult hockey players sometimes use the locker rooms simultaneously at the club’s facility, and when this occurs, the club has two individuals serve as monitors in the locker room, one more than their NGB recommends.

Policies of the three selected universities that operate sports camps also addressed issues of child protection, with some reflecting changes made to enhance youth safety in light of sexual abuse incidents at Penn State University. 
These policies covered practices to prevent sexual misconduct and the monitoring and supervision of campers, among other topics. For example:

- One university swim camp developed a code of conduct for its staff that states any inappropriate interaction or relationship with a camper will result in immediate termination and, depending on the circumstance, notification of law enforcement authorities.
- Officials at all three universities we visited stressed the importance of preventing one-on-one interactions between staff and campers through a practice known as two-deep leadership, as a way to limit opportunities for misconduct. For example, in one camp’s counselor handbook, private one-on-one interactions were listed as a violation.
- One university recently changed its policies on access to campus facilities following a report evaluating another university’s response to a high-profile case of sexual abuse of youth athletes on its campus. Using the report’s recommendations as a benchmark against its own policies, the university we visited changed access to its facilities so that electronic keycards previously used to enter a building would be deactivated once an individual no longer needed access.
- Over the last 2 years, another university created an office focused on youth on campus. This office developed a central registration system to maintain information about all university camps. In addition, the office created a system of spot checks in which its staff conduct in-person visits to camps to ensure staff-to-camper ratios are followed and those present at the camp are on the central registration list, among other safety measures.

All 11 of the youth athletic programs included in our study had policies requiring training of staff on youth athlete safety. 
The eight private athletic clubs required training of staff and volunteers with regular, routine, or frequent access to youth athletes, while the three universities we visited required all sports camp staff and volunteers to complete training. Training participation in both private clubs and university sports camps was generally monitored using a roster, according to program officials. Child sexual abuse prevention was a topic included in the required training for the selected athletic programs, and training generally covered topics such as how to identify the warning signs of, respond to, and report suspected abuse, including sexual abuse involving an athlete and athletic personnel. (See table 3.) All of the athletic programs we reviewed offered online training, and one program also offered training led by an instructor. Training is generally required at least every other year, and participants must complete one or more quizzes before receiving credit for completing the course. Some athletic programs we visited offer child sexual abuse prevention training and education resources to parents and athletes, though none require training of these groups. Further, officials from both types of athletic programs cited some challenges in implementing mandatory parent and athlete training. For example, officials from the eight private athletic clubs said they cannot require parents to take the training unless they are members of the NGB of their sport. One NGB official noted that some Canadian provinces require parents to take child abuse prevention training before their child can participate in athletic programs, but he believed that policy would be unlikely to be accepted in American sports. An official from the organization that spearheaded the effort to require training explained that it took the Canadian sports community years to embrace child sexual abuse prevention training. 
He told us that while organizations had background checks and other policies in place, education on the need for such policies and a greater understanding of the issue of abuse were also necessary. He explained that, in his view, perpetrators of sexual abuse are able to operate, in part, because of ignorance and indifference in the community. Eventually, through a survey, the organization found that the Canadian sports community supported the training and considered it a good recruitment and retention tool. Another NGB official told us that in its commissioned report to assess its SafeSport program against abuse prevention and response standards, parent training was cited as a weak area that needed improvement. Specifically, the author of the report recommended the NGB require parents to take SafeSport training, noting that few parents discuss sexual abuse prevention with their children and those who do often give inaccurate information. In response, the task force charged with addressing the report’s recommendations suggested the NGB strongly recommend parents take the training and encourage participation through an incentive program that would tie parent training on SafeSport with clubs’ SafeSport recognition status and funding. Regarding athlete training, this NGB’s task force recommended that its SafeSport committee and staff work with the training vendor to develop material appropriate for parents to use to discuss abuse prevention with children under age 12. At one university we visited, officials cited the short duration of camp programs, which generally last between 4 days and 1 week, as a barrier to expanding training to both parents and campers. Officials told us that camps do provide parents with general safety information such as emergency contact numbers and, in the case of one university camp, an overview of hiring practices. 
However, as one campus official explained, information on the Clery Act, sexual abuse prevention, or requirements to report is not currently provided to camp parents, although the university’s Clery statistics are available online. Athletic programs we visited developed some internal practices and policies to monitor compliance with athlete safety policies. Each of the three NGBs whose officials we talked with required the appointment of staff at the regional and, in some cases, local level to monitor SafeSport program implementation and oversee each private athletic club’s efforts to meet SafeSport requirements. In addition, one official explained that the NGB she represents is considering a recommendation, made in the report that assessed its SafeSport program, to conduct a baseline study to determine the extent of child abuse and the effectiveness of various prevention and response policies. University staff responsible for overseeing youth on campus also help monitor camp operations, according to officials from the universities we visited. For example, officials at one university we visited told us the university developed a central registration system that tracks all of its camps and includes information on the program name, schedules, and the staff and athletes to assist with monitoring. This tracking system allows campus officials to identify who works with each camper, making it easier to investigate allegations of inappropriate interactions between staff and campers, according to one campus security official at the university. University officials also explained that staff overseeing youth programs on campus will periodically observe athlete interactions with camp staff to ensure that child protection policies, such as never having a camper alone with one adult, are being followed. 
An official from another university told us that while the coaches are in charge of day-to-day operations of the camps, the administrative oversight duties are primarily handled centrally by the university’s camp coordinator and human resources. For the more heavily enrolled sports, coaches have the support of the university’s director of operations, who assists with the administration of the camp. An official from one private athletic club also explained how the wider sports community can help alert the club to potential problems with interactions between athletic personnel or other adults and youth athletes. For example, the official told us a parent notified the head coach that a registered sex offender, who had inadvertently been let into the building, was observing youth hockey practice. The club responded by developing a policy of escorting all visitors at the rink. In cases where the sexual abuse of a youth athlete is observed or suspected, all 11 athletic programs we visited have policies that require contacting the appropriate law enforcement or child protection officials. According to athletic program officials we spoke with, their programs’ policies reflected their state’s requirements for reporting child sexual abuse, including how to report, which child welfare or law enforcement agencies are designated to receive reports, and who is responsible for reporting. In some cases, officials at universities we visited told us they changed policies to reflect recent changes in state law that address child abuse, including sexual abuse. Additionally, athletic programs may choose to designate additional staff as mandatory reporters beyond those persons designated by state child abuse and neglect reporting laws. For example, officials from one university we visited told us the university decided to designate all staff on campus as mandatory reporters of child abuse and neglect after their state identified university administrators as mandatory reporters. 
University officials explained that while designating all staff as mandatory reporters could lead to over-reporting of the same issues, it would be better than incidents not being reported. In addition to contacting the appropriate law enforcement authorities, the policies for each of the programs we visited included reporting observed or suspected abuse to internal officials. The policies of local private athletic clubs we visited provided options for reporting incidents to the club or regional affiliate’s SafeSport staff or NGB by phone, email, or letter, or anonymously through an online reporting system. At university sports camps, internal reporting structures varied, but could include university police, general counsel, the Clery compliance office, and in some cases, the university’s Title IX coordinator. Following reports of suspected sexual abuse to law enforcement, which may lead to criminal investigations, all of the selected athletic programs we met with reported having response policies that generally include separating the alleged perpetrator from athletes and may also include immediate suspension. Policies also include conducting an internal investigation that could result in a range of sanctions, including bans from the athletic program if there are findings of wrongdoing. Under these policies, the accused is provided the right to receive a written notice of the complaint, present information during the investigation, and appeal the final decision. However, these reporting and response policies have not been put into action and tested because officials from each of the 11 athletic programs told us they were not aware of cases of alleged sexual abuse involving athletes and athletic personnel affiliated with their program. 
In addition to criminal investigations handled by law enforcement, any allegations of sexual abuse or misconduct occurring at the private athletic clubs we visited would generally be handled by their respective NGB, which would also be responsible for determining any policy violations and the resulting sanctions. In some cases, NGB officials lead investigations of abuse complaints at private athletic clubs and, in the case of two NGBs, contract with private investigators to carry out the investigation once the NGBs have completed initial work to ascertain basic information about the complaint and seek cooperation from the alleged victim. At the university sports camps we visited, after making reports to law enforcement, multiple departments and offices, including the Title IX compliance and the Clery Act compliance offices, the university police, the general counsel’s office, and others may be involved in responding to such allegations internally. At one school we visited, an official explained that the Title IX coordinator and the human resources department would work together to either conduct an investigation of any incident involving a youth athlete participating in campus programs or engage an outside investigator to conduct the investigation. According to the response policies of athletic programs we visited, athletic personnel found to have committed sexual abuse against a youth athlete can face penalties including bans from the sport or campus. Officials at all three NGBs we spoke with told us they can recommend imposing a lifetime ban from their sport on those found to have sexually abused youth athletes. Two of the three NGBs we reviewed also publish a list of banned coaches and other athletic personnel. However, officials from one NGB told us they knew of multiple coaches who were banned from their sport, only to move on to coaching in other sports. 
At the university sports camps we visited, if there are findings of wrongdoing, the universities can terminate the employee involved. One school told us they would ban perpetrators of sexual violence from campus for three years, and if the perpetrator is an employee, the human resources department can impose a variety of sanctions, from mandatory counseling to suspension or termination. As with allegations of sexual abuse, in cases of inappropriate behavior that falls short of abuse, the policies of selected athletic programs we visited included a variety of disciplinary actions. For example, officials from one NGB explained that although sharing a hotel room with an athlete while traveling for competition was formerly a common cost-saving measure, SafeSport policies now strictly forbid it. However, they said some coaches have continued to share rooms, and in response the NGB issued formal warning letters to coaches who violated the policy. In addition, according to this official, more severe measures would be taken in the event of subsequent violations or if the shared-room violation is combined with other violations. Officials from university camps told us they also have a variety of disciplinary actions to choose from when staff members are found to have acted inappropriately with campers, such as verbal or written reprimands or requiring staff to take leave. According to one USOC official, responding to allegations of sexual misconduct requires significant expertise. To address this, the official told us the USOC is working to create a United States Center for SafeSport that will establish an administrative proceeding for handling allegations of sexual misconduct as defined in a standardized safe sport code. 
This centralized approach to investigating and resolving allegations at the center would aim to deliver expert and consistent results across sports and sports organizations, as well as provide the ability to effectively share information about individuals who have been suspended or banned for policy violations. This official explained that all NGBs have adopted definitions for sexual misconduct established by the SafeSport program. According to this official, plans for the center, once created, include developing a national code to help individuals further distinguish between appropriate and inappropriate behavior, which can enhance people’s ability and willingness to report misconduct, and the ability of the center to ensure fair and equitable responses to incidents. According to this official:

- There are plans for the center to include a board of directors that would have no material conflicts or relationships with the USOC or any NGB, to ensure independent review of all SafeSport cases.
- The center may compile and maintain a centralized list of those who are banned from USOC and NGB membership.
- The center is expected to be launched sometime during 2015. As of February 2015, work to secure insurance within budget and sustainable financial support for the center's first five years was ongoing.

We provided a draft of this report to the Departments of Education, HHS, and Justice for review and comment. Education, HHS, and Justice provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Education and HHS, the Attorney General, and interested congressional committees. The report will also be available at no charge on the GAO Web site at www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. This appendix discusses in detail our methodology for addressing two research questions for athletic programs aimed at developing high performing athletes: (1) What role do federal agencies play in preventing and responding to the sexual abuse of youth participating in these programs? and (2) What steps do selected athletic programs take to prevent and respond to the sexual abuse of youth athletes? To address these questions, we reviewed relevant federal laws, regulations, and guidance. We conducted interviews with officials from the Departments of Education, Health and Human Services, and Justice, representatives of youth sports and education associations, and experts. We also conducted site visits to a nongeneralizable sample of youth sports camps on university campuses and private athletic programs in three states, which were selected based on the popularity of sports among youth, gender participation, college rankings in selected sports, and geographic diversity. We conducted this performance audit from February 2014 through May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
To determine the federal role in preventing and responding to the sexual abuse of youth athletes in these programs, we reviewed relevant federal laws, including the Child Abuse Prevention and Treatment Act (CAPTA); Title IX of the Education Amendments of 1972 (Title IX); and the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act (Clery Act), among others. In addition, we reviewed Education’s regulations and guidance on Title IX and Clery Act requirements, and the agency’s policies and procedures for ensuring compliance with these requirements. However, we did not evaluate the effectiveness of Education’s policies and procedures to assess compliance with Title IX or the Clery Act. We also reviewed documents on suggested practices for preventing and responding to child sexual abuse in youth-serving organizations from the Centers for Disease Control and Prevention and the National Center for Missing and Exploited Children, an organization that receives grant funding from Justice. To examine the federal role in addressing the sexual abuse of these youth athletes, we also interviewed officials from Education, HHS and Justice, as well as experts on coaching, athletics administration, and sexual abuse. At Education, we spoke with officials in the Office for Civil Rights, the Office of Postsecondary Education, and the Federal Student Aid office. At HHS, we spoke with officials at the Administration for Children and Families and the Centers for Disease Control and Prevention. At Justice, we interviewed officials from the Office on Violence Against Women, the Office of Justice Programs, the Civil Rights Division, and the Federal Bureau of Investigation. 
Additionally, we interviewed officials from a range of relevant organizations, including the National Collegiate Athletic Association (NCAA), the Association of Title IX Administrators, the National Center for Missing and Exploited Children, the National Sexual Violence Resource Center, and the Pennsylvania Coalition Against Rape. To gather more in-depth information on the policies and practices selected private athletic clubs and university sports camps use to protect youth athletes from sexual abuse, we conducted site visits to a total of 11 athletic programs located in three states: California, Florida, and Texas. University sports camps were selected based on a sequence of steps, which included identifying the most popular sports among youth, identifying universities that offered youth camps and clinics and that had NCAA division I rankings in these sports, and geographic diversity. First, we identified the 10 most popular athletic programs for high school students during the 2012-2013 school year. We then identified universities that offered youth sports camps in five sports: basketball, football, gymnastics, swimming and diving, and volleyball. Our final selection took into account sport popularity, Division I rankings in these sports during the 2013-2014 school year, gender participation, camp and clinic operations, camp type (overnight, day, and commuter camps), and geographic diversity. We selected a total of three universities with youth sports camps and clinics in the above five sports. The university sports camps we selected were all run directly by the university. However, some sports camps on campuses may have a different operational structure; for example, they may be run by a private entity that is simply renting space on the campus. In addition to university sports camps, we visited a total of eight local private athletic clubs implementing an athlete safety program based on the SafeSport program established by the U.S. Olympic Committee (USOC). 
To select the private athletic programs, we considered gender diversity, recommendations by experts and those who conduct research on the intersection of sports and athlete sexual abuse, and sport diversity. We selected eight local private athletic clubs in the sports of figure skating, hockey, and swimming, and located in proximity to the universities we visited. In addition, we met with four of their regional affiliates. We also interviewed officials from the USOC, and the three NGBs of the Olympic sports we selected. At the local private athletic clubs we visited, we spoke with board members, athlete safety coordinators, and coaches; at the university campuses we visited we spoke with university compliance officials, university administrators, legal counsel, and camp directors. During each of these interviews, we collected information on policies, training materials, and other relevant documentation for preventing and responding to the sexual abuse of, and misconduct against, youth athletes by athletic personnel. We did not assess the sufficiency of these policies or how selected athletic programs implemented these policies. We also did not evaluate how selected athletic programs’ policies were applied to past cases of child sexual abuse as it was beyond the scope of this report. In addition, we did not evaluate whether any particular athletic program was in compliance with any state or federal requirements. Information we gathered on our site visits represents the conditions present at the time of our visit. We cannot comment on any changes that may have occurred after our fieldwork was completed. Our site visit findings cannot be generalized to the larger youth athletics population. In addition to the contact named above, Sara Kelly and Debra Prescott (Assistant Directors), Claudine Pauselli (Analyst-in-Charge), Christina Cantor, and Aimee Elivert made key contributions to this report. 
Also contributing to this report were James Bennett, Rachel Beers, Sarah Cornetto, Helen Desaulniers, Holly Dye, Nisha Hazra, Kristen Jones, Kathy Leslie, Kristy Love, Sheila McCoy, and Andrew Stavisky. Child Welfare: Federal Agencies Can Better Support State Efforts to Prevent and Respond to Sexual Abuse by School Personnel. GAO-14-42. Washington, D.C.: January 30, 2014. Child Care: Overview of Relevant Employment Laws and Cases of Sex Offenders at Child Care Facilities. GAO-11-757. Washington, D.C.: August 19, 2011. Child Maltreatment: Strengthening National Data on Child Fatalities Could Aid in Prevention. GAO-11-599. Washington, D.C.: July 7, 2011. K-12 Education: Selected Cases of Public and Private Schools That Hired or Retained Individuals with Histories of Sexual Misconduct. GAO-11-200. Washington, D.C.: December 8, 2010. Seclusions and Restraints: Selected Cases of Death and Abuse at Public and Private Schools and Treatment Centers. GAO-09-719T. Washington, D.C.: May 19, 2009. Residential Facilities: Improved Data and Enhanced Oversight Would Help Safeguard the Well-Being of Youth with Behavioral and Emotional Challenges. GAO-08-346. Washington, D.C.: May 13, 2008. Residential Treatment Programs: Concerns Regarding Abuse and Death in Certain Programs for Troubled Youth. GAO-08-146T. Washington, D.C.: October 10, 2007.
Media reports of the sexual abuse of youth athletes by their coaches have raised questions about how athletic organizations protect against such abuse. Research shows that the power dynamic between coaches and athletes aiming for high performance makes those athletes uniquely vulnerable to abuse. Although states are primarily responsible for addressing abuse, federal laws may apply, such as those that prohibit sex discrimination, including sexual abuse, in federally-funded education programs, require reports of campus crimes, and set minimum standards for state child abuse reporting laws. GAO was asked to review efforts to prevent and respond to the sexual abuse of youth athletes under age 18. GAO examined (1) the role of federal agencies in preventing and responding to sexual abuse of youth athletes, and (2) steps selected athletic programs aimed at high performance take to prevent and respond to such abuse. GAO reviewed relevant federal laws, regulations, guidance, and literature; visited a nongeneralizable sample of 11 athletic programs in three states selected on factors including sport popularity, gender participation, and geographic diversity; and interviewed federal agencies, relevant associations, and experts. Several federal agencies have roles in preventing and responding to the sexual abuse of a broad population of youth under age 18, which may include youth athletes. For example, the Department of Health and Human Services (HHS) and the National Center for Missing and Exploited Children, a nonprofit organization that receives Department of Justice (Justice) funding, published suggested practices for preventing child sexual abuse in youth-serving organizations. These suggested practices include defining and prohibiting misconduct; screening staff using fingerprint-based criminal background checks and other tools; and training staff on how to recognize, report, and respond to abuse. 
The National Center for Missing and Exploited Children also makes available information on child protection policies in youth sports settings, such as defining appropriate coach-athlete relationships. In addition, Justice may investigate alleged youth athlete abuse if there is a possibility the case constitutes a federal crime. These efforts may apply to youth in a range of settings. In addition, the Departments of Education (Education) and Justice oversee compliance with a civil rights law that protects individuals from sex discrimination, including sexual abuse, at schools that receive federal funding, which would generally include youth participating in sports camps on university campuses. Education also oversees postsecondary school compliance with a federal law requiring reporting of crimes, including sex offenses, that occur on or near campus. To ensure schools are meeting their obligations under these laws, Education and Justice conduct compliance reviews and investigations, and Justice participates in federal litigation involving claims of sex discrimination. Education also provides guidance and training to schools in areas such as developing codes of conduct, offering prevention and awareness training, and establishing reporting procedures. The 11 athletic programs GAO reviewed all reported using methods, such as screening and training staff, to help prevent and respond to the sexual abuse of youth athletes. For example, the selected athletic programs, which included 8 private athletic clubs and 3 universities operating youth sports camps, all reported using name-based background checks to screen staff members for a criminal history. Two universities that operated sports camps reported they sometimes used fingerprint-based checks, while officials from other athletic programs cited the high cost of fingerprint checks as a barrier. 
Training for athletic staff in the programs GAO visited included how to identify signs of, respond to, and report suspected incidents of sexual abuse. Policies for all of these athletic programs also require staff to report suspected abuse to law enforcement. Further, the selected programs had response policies that generally included removing the suspected offender from the program and conducting their own investigations, which could result in lifetime bans from the program. Athletic programs' policies also included a variety of possible disciplinary actions, such as warning letters or required leave, for addressing inappropriate behavior that falls short of sexual abuse. Some of these policies have been created or revised in recent years, including the policies that private athletic clubs are implementing based on the United States Olympic Committee's athlete safety program, SafeSport, which prohibits various forms of misconduct, including child sexual abuse. GAO did not assess the effectiveness of any of the selected athletic programs' policies. GAO makes no recommendations in this report. Education, HHS, and Justice provided technical comments, which GAO incorporated as appropriate.
In 2000, a report of the Surgeon General noted that tooth decay is the most common chronic childhood disease. Left untreated, the pain and infections caused by tooth decay may lead to problems in eating, speaking, and learning. Tooth decay is almost completely preventable, and the pain, dysfunction, or, on extremely rare occasions, death resulting from dental disease can be avoided (see fig. 1). Preventive dental care can make a significant difference in health outcomes and has been shown to be cost-effective. For example, a 2004 study found that average dental-related costs for low-income preschool children who had their first preventive dental visit by age 1 were less than one-half ($262 compared to $546) of average costs for children who received their first preventive visit at ages 4 through 5. The American Academy of Pediatric Dentistry (AAPD) recommends that each child see a dentist when his or her first tooth erupts and no later than the child’s first birthday, with subsequent visits occurring at 6-month intervals or more frequently if recommended by a dentist. The early initial visit can establish a “dental home” for the child, defined by AAPD as the ongoing relationship with a dental provider who can ensure comprehensive and continuously accessible care. Comprehensive dental visits can include both clinical assessments, such as for tooth decay and sealants, and appropriate discussion and counseling for oral hygiene, injury prevention, and speech and language development, among other topics. Because resistance to tooth decay is determined in part by genetics, eating patterns, and oral hygiene, early prevention is important. Delaying the onset of tooth decay may also reduce long-term risk for more serious decay by delaying the exposure to caries risk factors to a time when the child can better control his or her health behaviors. 
Recognizing the importance of good oral health, HHS in 1990 and again in 2000 established oral health goals as part of its Healthy People 2000 and 2010 initiatives. These include objectives related to oral health in children, for example, reducing the proportion of children with untreated tooth decay. One objective of Healthy People 2010 relates to the Medicaid population: to increase the proportion of low-income children and adolescents under the age of 19 who receive any preventive dental service in the past year, from 25 percent in 1996 to 66 percent in 2010. Medicaid, a joint federal and state program that provides health care coverage for low-income individuals and families; pregnant women; and aged, blind, and disabled people, provided health coverage for an estimated 20.1 million children aged 2 through 18 in federal fiscal year 2005. The states operate their Medicaid programs within broad federal requirements and may contract with managed-care organizations to provide Medicaid benefits or use other forms of managed care, when approved by CMS. CMS estimates that as of June 30, 2006, about 65 percent of Medicaid beneficiaries received benefits through some form of managed care. State Medicaid programs must cover some services for certain populations under federal law. For instance, under Medicaid’s early and periodic screening, diagnostic, and treatment (EPSDT) benefit, states must provide dental screening, diagnostic, preventive, and related treatment services for all eligible Medicaid beneficiaries under age 21. Children in Medicaid aged 2 through 18 often experience dental disease and often do not receive needed dental care, and although receipt of dental care has improved somewhat in recent years, the extent of dental disease for most age groups has not. 
Information from NHANES surveys from 1999 through 2004 showed that about one in three children ages 2 through 18 in Medicaid had untreated tooth decay, and one in nine had untreated decay in three or more teeth. Compared to children with private health insurance, children in Medicaid were substantially more likely to have untreated tooth decay and to be in urgent need of dental care. MEPS surveys conducted in 2004 and 2005 found that almost two in three children in Medicaid aged 2 through 18 had not received dental care in the previous year and that one in eight never sees a dentist. Children in Medicaid were less likely to have received dental care than privately insured children, although they were more likely to have received care than children without health insurance. Children in Medicaid also fared poorly when compared to national benchmarks, as the percentage of children in Medicaid ages 2 through 18 who received any dental care—37 percent—was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. MEPS data on Medicaid children who had received dental care—from 1996 through 1997 compared to 2004 through 2005—showed some improvement for children ages 2 through 18 in Medicaid. Comparisons of recent NHANES data to data from the late 1980s and 1990s suggest that the extent to which children ages 2 through 18 in Medicaid experience dental disease has not decreased for most age groups. Dental disease is a common problem for children aged 2 through 18 enrolled in Medicaid, according to national survey data (see fig. 2). NHANES oral examinations conducted from 1999 through 2004 show that about three in five children (62 percent) in Medicaid had experienced tooth decay, and about one in three (33 percent) were found to have untreated tooth decay. 
Close to one in nine—about 11 percent—had untreated decay in three or more teeth, which is a sign of unmet need for dental care and, according to some oral health experts, can suggest a severe oral health problem. Projecting these proportions to 2005 enrollment levels, we estimate that 6.5 million children in Medicaid had untreated tooth decay, with 2.2 million children having untreated tooth decay involving three or more teeth. Compared with children with private health insurance, children in Medicaid were at much higher risk of tooth decay and experienced problems at rates more similar to those without any insurance. As shown in figure 3, the proportion of children in Medicaid with untreated tooth decay (33 percent) was nearly double the rate for children who had private insurance (17 percent) and was similar to the rate for uninsured children (35 percent). These children were also more than twice as likely to have untreated tooth decay in three or more teeth than their privately insured counterparts (11 percent for Medicaid children compared to 5 percent for children with private health insurance). These disparities were consistent across all age groups we examined. According to NHANES data, more than 5 percent of children in Medicaid aged 2 through 18 had urgent dental conditions, that is, conditions in need of care within 2 weeks for the relief of symptoms and stabilization of the condition. Such conditions include tooth fractures, oral lesions, chronic pain, and other conditions that are unlikely to resolve without professional intervention. On the basis of these data, we estimate that in 2005, 1.1 million children aged 2 through 18 in Medicaid had conditions that warranted seeing a dentist within 2 weeks. Compared to children who had private insurance, children in Medicaid were more than four times as likely to be in urgent need of dental care. 
The NHANES data suggest that the rates of untreated tooth decay for some Medicaid beneficiaries could be about three times as high as national health benchmarks. For example, the NHANES data showed that 29 percent of children in Medicaid aged 2 through 5 had untreated decay, which compares unfavorably with the Healthy People 2010 target for untreated tooth decay of 9 percent of children aged 2 through 4. Most children in Medicaid do not visit the dentist regularly, according to 2004 and 2005 nationally representative MEPS data (see fig. 4). According to these data, nearly two in three children in Medicaid aged 2 through 18 had not received any dental care in the previous year. Projecting these proportions to 2005 enrollment levels, we estimate that 12.6 million children in Medicaid have not seen a dentist in the previous year. In reporting on trends in dental visits of the general population, AHRQ reported in 2007 that about 31 percent of poor children (family income less than or equal to the federal poverty level) and 34 percent of low-income children (family income above 100 percent but less than or equal to 200 percent of the federal poverty level) had a dental visit during the year. Survey data also showed that about one in eight children (13 percent) in Medicaid reportedly never see a dentist. MEPS survey data also show that many children in Medicaid were unable to access needed dental care. Survey participants reported that about 4 percent of children aged 2 through 18 in Medicaid were unable to get needed dental care in the previous year. Projecting this percentage to estimated 2005 enrollment levels, we estimate that 724,000 children aged 2 through 18 in Medicaid could not obtain needed care. Regardless of insurance status, most participants who said a child could not get needed dental care said they were unable to afford such care. 
However, 15 percent of children in Medicaid who had difficulty accessing needed dental care reportedly were unable to get care because the provider refused to accept their insurance plan, compared to only 2 percent of privately insured children. Children enrolled in Medicaid were less likely to have received dental care than privately insured children, but they were more likely to have received dental care than children without health insurance. (See fig. 5.) Survey data from 2004 through 2005 showed that about 37 percent of children in Medicaid aged 2 through 18 had visited the dentist in the previous year, compared with about 55 percent of children with private health insurance, and 26 percent of children without insurance. The percentage of children in Medicaid who received any dental care—37 percent—was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. The NHANES data from 1999 through 2004 also provide some information related to the receipt of dental care. The presence of dental sealants, a form of preventive care, is considered to be an indicator that a person has received dental care. About 28 percent of children in Medicaid had at least one dental sealant, according to 1999 through 2004 NHANES data. In contrast, about 40 percent of children with private insurance had a sealant. However, children in Medicaid were more likely to have sealants than children without health insurance (about 20 percent). While comparisons of past and more recent survey data suggest that a larger proportion of children in Medicaid had received dental care in recent surveys, the extent that children in Medicaid experience dental disease has not decreased. 
A comparison of NHANES results from 1988 through 1994 with results from 1999 through 2004 showed that the rates of untreated tooth decay were largely unchanged for children in Medicaid aged 2 through 18: 31 percent of children had untreated tooth decay in 1988 through 1994, compared with 33 percent in 1999 through 2004 (see fig. 6). The proportion of children in Medicaid who experienced tooth decay increased from 56 percent in the earlier period to 62 percent in more recent years. This increase appears to be driven by younger children, as the 2 through 5 age group had substantially higher rates of dental disease in the more recent time period, 1999 through 2004. This preschool age group experienced a 32 percent rate of tooth decay in the 1988 through 1994 time period, compared to almost 40 percent experiencing tooth decay in 1999 through 2004 (a statistically significant change). Data for adolescents, by contrast, suggest declining rates of tooth decay. Almost 82 percent of adolescents aged 16 through 18 in Medicaid had experienced tooth decay in the earlier time period, compared to 75 percent in the latter time period (although this change was not statistically significant). These trends were similar for rates of untreated tooth decay, with the data suggesting rates going up for young children, and declining or remaining the same for older groups that are more likely to have permanent teeth. According to CDC, these trends are similar for the general population of children, for which tooth decay in permanent teeth has generally declined and untreated tooth decay has remained unchanged. CDC also found that tooth decay in preschool aged children in the general population had increased in primary teeth. At the same time, indicators of receipt of dental care, including the proportion of children who had received dental care in the past year and use of sealants, have shown some improvement. 
Two indicators of receipt of dental care showed improvement from earlier surveys: The percentage of children in Medicaid aged 2 through 18 who received dental care in the previous year increased from 31 percent in 1996 through 1997 to 37 percent in 2004 through 2005, according to MEPS data (see fig. 7). This change was statistically significant. Similarly, AHRQ reported that the percentage of children with a dental visit increased between 1996 and 2004 for both poor children (28 percent to 31 percent) and low-income children (27 percent to 34 percent). The percentage of children aged 6 through 18 in Medicaid with at least one dental sealant increased nearly threefold, from 10 percent in 1988 through 1994 to 28 percent in 1999 through 2004, according to NHANES data, and these changes were statistically significant. The increase in receipt of sealants may be due in part to the increased use of dental sealants in recent years, as the percentage of uninsured and insured children with dental sealants doubled over the same time period. Adolescents aged 16 through 18 in Medicaid had the greatest increase in receipt of sealants relative to other age groups. The percentage of adolescents with dental sealants was about 6 percent in the earlier time period, and 33 percent more recently. The percentage of children in Medicaid who reportedly never see a dentist remained about the same between the two time periods, with about 14 percent in 1996 through 1997 who never saw a dentist, and 13 percent in 2004 through 2005, according to MEPS data. More information on our analysis of NHANES and MEPS for changes in dental disease and receipt of dental care for children in Medicaid over time, including comments we received from HHS on a draft of the report and our response, more detailed data tables, and confidence intervals can be found in the report released today. 
The information provided by nationally representative surveys regarding the oral health of our nation’s low-income children in Medicaid raises serious concerns. Measures of access to dental care for this population, such as children’s dental visits, have improved somewhat in recent surveys, but remain far below national health goals. Of even greater concern are data that show that dental disease is prevalent among children in Medicaid, and is not decreasing. Millions of children in Medicaid are estimated to have dental disease in need of treatment; in many cases this need is urgent. Given this unacceptable condition, it is important that those involved in providing dental care to children in Medicaid—the federal government, states, providers, and others—address the need to improve the oral health condition of these children and to achieve national oral health goals. As you know, we have ongoing work for the subcommittee examining state and federal efforts to ensure that children in Medicaid receive needed dental services. We expect to report to the subcommittee on our findings and any recommendations in spring 2009. Mr. Chairman, this concludes my prepared remarks. I will be happy to answer any questions that you or other members of the Subcommittee may have.

For information regarding this testimony, please contact Alicia Puente Cackley at (202) 512-7114 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Katherine Iritani, Assistant Director; Sarah Burton; and Terry Saiki made key contributions to this statement.

Medicaid: Extent of Dental Disease in Children Has Not Decreased, and Millions Are Estimated to Have Untreated Tooth Decay. GAO-08-1121. Washington, D.C.: September 23, 2008.

Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007. 
Medicaid Managed Care: Access and Quality Requirements Specific to Low-Income and Other Special Needs Enrollees. GAO-05-44R. Washington, D.C.: December 8, 2004.

Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.

Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.

Oral Health: Factors Contributing to Low Use of Dental Services by Low-Income Populations. GAO/HEHS-00-149. Washington, D.C.: September 11, 2000.

Oral Health: Dental Disease Is a Chronic Problem Among Low-Income Populations. GAO/HEHS-00-72. Washington, D.C.: April 12, 2000.

Medicaid Managed Care: Challenge of Holding Plans Accountable Requires Greater State Effort. GAO/HEHS-97-86. Washington, D.C.: May 16, 1997.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In recent years, concerns have been raised about the adequacy of dental care for low-income children. Attention to this subject became more acute due to the widely publicized case of Deamonte Driver, a 12-year-old boy who died as a result of an untreated infected tooth that led to a fatal brain infection. Deamonte had health coverage through Medicaid, a joint federal and state program that provides health care coverage, including dental care, for millions of low-income children. Deamonte had extensive dental disease and his family was unable to find a dentist to treat him. GAO was asked to examine the extent to which children in Medicaid experience dental disease, the extent to which they receive dental care, and how these conditions have changed over time. To examine these indicators of oral health, GAO analyzed data, by insurance status, from two nationally representative surveys of the Department of Health and Human Services (HHS): the National Health and Nutrition Examination Survey (NHANES) and the Medical Expenditure Panel Survey (MEPS). This statement summarizes the resulting report being released today, Medicaid: Extent of Dental Disease in Children Has Not Decreased, and Millions Are Estimated to Have Untreated Tooth Decay (GAO-08-1121). In commenting on a draft of the report, HHS acknowledged the challenge of providing dental services to children in Medicaid, and cited the agency's related activities. Dental disease remains a significant problem for children aged 2 through 18 in Medicaid. Nationally representative data from the 1999 through 2004 NHANES surveys--which collected information about oral health through direct examinations--indicate that about one in three children in Medicaid had untreated tooth decay, and one in nine had untreated decay in three or more teeth. Projected to 2005 enrollment levels, GAO estimates that 6.5 million children aged 2 through 18 in Medicaid had untreated tooth decay. 
Children in Medicaid remain at higher risk of dental disease compared to children with private health insurance; children in Medicaid were almost twice as likely to have untreated tooth decay. Receipt of dental care also remains a concern for children aged 2 through 18 in Medicaid. Nationally representative data from the 2004 through 2005 MEPS survey--which asks participants about the receipt of dental care for household members--indicate that only one in three children in Medicaid ages 2 through 18 had received dental care in the year prior to the survey. Similarly, about one in eight children reportedly never sees a dentist. More than half of children with private health insurance, by contrast, had received dental care in the prior year. Children in Medicaid also fared poorly when compared to national benchmarks, as the percentage of children in Medicaid who received any dental care--37 percent--was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. Survey data on Medicaid children's receipt of dental care showed some improvement; for example, use of sealants went up significantly between the 1988 through 1994 and 1999 through 2004 time periods. Rates of dental disease, however, did not decrease, although the data suggest the trends vary somewhat among different age groups. Younger children in Medicaid--those aged 2 through 5--had statistically significant higher rates of dental disease in the more recent time period as compared to earlier surveys. By contrast, data for Medicaid adolescents aged 16 through 18 show declining rates of tooth decay, although the change was not statistically significant.
Within FDA, the Office of Medical Products and Tobacco is responsible for providing leadership for the medical product centers and coordinating their plans, strategies, and programs. Under the office’s direction, three FDA centers have primary responsibility for overseeing medical products and developing strategic plans to guide their activities: The Center for Biologics Evaluation and Research (CBER) is responsible for overseeing most biologics, such as blood, vaccines, and human tissues. The Center for Drug Evaluation and Research (CDER) is responsible for overseeing drugs and certain therapeutic biologics. The Center for Devices and Radiological Health (CDRH) is responsible for overseeing devices and for ensuring that radiation-emitting products, such as microwaves and x-ray machines, meet radiation safety standards. Several offices within FDA provide additional oversight and management support to assist the three medical product centers. FDA’s Office of Policy, Planning, Legislation, and Analysis supports strategic planning at the agency-wide, program-specific, and center levels across FDA, including coordinating the development of the SIMP and FDA’s agency-wide strategic priorities document. FDA’s Office of Human Resources supports recruitment and workforce management activities. Finally, FDA’s Office of Regulatory Affairs conducts field activities for all of FDA’s medical product centers, such as inspections of domestic and foreign establishments involved in manufacturing medical products. The centers conduct pre- and post-market oversight of medical products, as well as formulate guidance, perform research, communicate information to industry and the public, and set priorities. Premarket oversight comprises review activities to ensure that medical products are safe and effective for use before they can be marketed in the United States. FDA’s premarket oversight typically begins when companies—known as sponsors—develop a medical product. 
Before beginning clinical trials (studies involving humans) for a new medical product, sponsors must submit an application so that FDA can preliminarily assess the product for safety. As part of its premarket oversight, FDA may also choose to inspect establishments producing medical products to ensure their manufacturing processes meet quality standards. Postmarket oversight includes review activities to provide assurance that medical products remain safe and effective after they have been marketed and to enable FDA to take regulatory actions if a safety issue is identified, such as requiring that sponsors communicate new safety information to the public and health care providers or withdraw the product from the market. Examples of postmarket oversight include reviewing reports of adverse events to monitor the safety of marketed medical products and examining advertising and other promotional materials to ensure they are not false or misleading. FDA may require sponsors to provide additional information both before and after a product has been approved. For example, FDA may require medical product manufacturers to create a Risk Evaluation and Mitigation Strategy to ensure that the benefits of a medical product outweigh its risks. A significant portion of FDA’s annual appropriation consists of amounts derived from user fees paid by the medical products industry. Beginning in 1992 with prescription drugs, Congress has authorized the collection of user fees from the medical products industry to provide additional resources for certain FDA oversight activities. Each user fee program is subject to reauthorization every 5 years and supports different oversight activities across each of the centers, as illustrated in figure 1. In 2012, FDASIA reauthorized or authorized four user fee programs for medical products. 
It included the fifth reauthorization of the Prescription Drug User Fee Act of 1992 (PDUFA), which allows FDA to collect user fees from manufacturers of prescription drugs. It also included the third reauthorization of the Medical Device User Fee and Modernization Act (MDUFA), which allows FDA to collect user fees from manufacturers of medical devices. Congress also authorized two new user fee programs in FDASIA: the Biosimilar User Fee Act (BsUFA), and the Generic Drug User Fee Amendments (GDUFA). BsUFA authorizes FDA to collect user fees from manufacturers of biosimilars, which FDA may approve based on a sponsor’s ability to show that the product is highly similar to an FDA-approved biological product and has no clinically meaningful differences in terms of safety and effectiveness. GDUFA authorizes FDA to collect user fees from manufacturers of generic drugs. Prior to each user fee program reauthorization, FDA negotiates with representatives of the affected medical product industry to identify goals for how FDA should spend those user fees over the next 5-year authorization period. Once FDA and the industry reach agreement, the Secretary of Health and Human Services submits letters containing these commitments to Congress. The user fee commitments contain performance goals for FDA’s review activities, such as reviewing and acting upon a certain number of received medical product applications within certain time frames. User fee commitments may also require FDA to undertake certain actions, such as implementing agreed-upon efficiency enhancements by a given date. FDA reports annually to Congress on progress made in achieving performance goals identified in each of the user fee commitments. These reports contain both descriptions of each center’s relevant oversight activities over the previous year, and data on its performance toward meeting user fee commitments. 
We found that the SIMP does not contain key elements of strategic planning and therefore does not present a comprehensive strategy across the medical product centers. Our previous work has shown that strategic planning for activities below the agency-wide level is a leading practice for successful agencies, and can help agencies integrate activities, align goals, and coordinate performance management across different parts of their organization. However, the SIMP does not fully contain several of these leading practices. Of the seven relevant strategic planning elements from GPRA, the SIMP fully contains two elements, partially contains four elements, and does not contain one element. In particular, we found that the SIMP contains a mission statement and describes how FDA incorporated input from Congress; it partially contains a description of its general goals and objectives, the strategies needed to achieve its goals and objectives, how its performance goals relate to its general goals and objectives, and program evaluations used to review its goals and objectives; and it does not identify external factors that could significantly affect the achievement of its goals and objectives. Specifically, the SIMP presents high-level information on goals and performance measures for medical product oversight, but lacks detail on how it will be used or implemented. Each of the SIMP’s first two sections describes a goal—improving efficiency and developing the workforce, respectively—and lists planned or ongoing initiatives to achieve that goal. For most of these initiatives, rather than describe the necessary steps, planned accomplishments, or time frames for implementation, the SIMP provides a high-level description of what FDA expects to achieve. In addition, the SIMP’s summary states that the plan reflects coordination and cooperation among the centers to address their program-specific needs, share best practices, and share common solutions. 
However, FDA officials told us that they do not use the SIMP to address issues requiring center collaboration, and acknowledged that the plan did not represent the full range of working relationships among the centers. Moreover, the SIMP does not fully link its performance goals to its general goals and objectives. The SIMP instead describes performance measures related to FDA’s user fee commitments, even though several of the initiatives included in the plan are unrelated to these commitments. FDA officials explained that they focused the SIMP’s performance measures on user fee commitments rather than, for example, tying performance measures to each initiative, because user fee commitments are the main vehicle by which FDA assesses the efficiency of each medical product center’s premarket review. Additionally, groups we spoke with that represent the medical products industry did not view the SIMP as an effective strategic planning document for FDA. Of the five industry groups we interviewed, two were unfamiliar with the SIMP and the others did not see how its contents related to strategic planning. For example, representatives from one industry group said that the SIMP was neither integrated nor strategic, because it merely described the different activities of the centers rather than establishing one overarching strategic approach for all of the centers. Additionally, representatives from another industry group said that the SIMP lacked detail on how FDA would use it or implement the initiatives it described. FDA officials said that due to the circumstances around FDASIA’s enactment in 2012, they chose to develop the SIMP as a point-in-time document to address legislative requirements rather than as a strategic plan for medical product oversight. For example, agency officials said FDASIA required FDA to submit the SIMP within a year of enactment, during which time FDA was also developing its agency-wide strategic priorities document. 
Officials said that more time would have better enabled FDA to align the SIMP with agency-wide goals, and helped the agency to structure the plan as a strategic planning document. Officials also told us that leadership gaps in the Office of Medical Products and Tobacco, caused in part by turnover in the Deputy Commissioner position, created challenges when developing the SIMP. Officials said that, given these factors, the agency chose to develop a more limited document. Despite acknowledging that the SIMP was not intended to be an effective strategic planning document, FDA officials said that the SIMP’s development process was useful because it facilitated coordination and information sharing between the centers on how to achieve certain user fee goals. Nonetheless, FDA officials acknowledged the growing need for strategic planning across the medical product centers to improve center collaboration and address emerging issues, but said that it may not require a separate strategic plan. Officials said that some issues, such as staffing vacancies and coordination with other agencies, were better addressed at an agency-wide level. However, they indicated that integration and collaboration across the medical product centers are important for other issues that the agency is working to address, such as data sharing, evidence generation, biomarker integration, combination products, consistent terminology, patient engagement, and the medical product review process. FDA officials also said that these types of issues have become more important as the complexity of medical products has increased, and that coordination can help the centers share leading practices to address these issues. For example, officials said that collaboration could help the centers develop more effective clinical trials, improve their decision-making, and improve the quality of evidence and clarity of guidance. 
For these issues, FDA officials said that they continue to strategically plan across the centers without a written document specifically for medical products by using other planning documents. Although they noted that the agency’s resources have been better spent working toward goals in existing plans, rather than putting together a new strategic plan specific to medical product oversight, they indicated that more formal planning in the future may be useful as resources become available. FDA officials said that they did not structure the SIMP as a strategic plan, because they thought it would be duplicative of other FDA strategic plans; however, we found that none of these other plans comprehensively describes FDA’s long-term plan for addressing key issues among the centers, as summarized below: FDA has an overarching strategic priorities document that includes strategic goals and objectives for medical product activities. This document describes a broad level of activities, but does not specifically discuss strategies across the centers. For example, one FDA goal is partially aimed at improving coordination within FDA, and the agency also describes some activities that may require the centers’ collaboration, such as developing comprehensive regulatory approaches for integrating approval and compliance functions. FDA officials said that they use the annual budget process as an opportunity for strategic planning. While FDA’s fiscal year 2017 budget justification describes planned activities specific to each center, its planning across the centers is limited to a few specific initiatives, such as developing scientific workshops to advance the development of pediatric therapeutic products. FDA officials identified strategic plans for specific initiatives that involve each center, such as FDA’s strategic plan for advancing regulatory science and FDA’s strategic plan for information technology. 
However, we recently reported on FDA’s strategic plan for information technology, finding a lack of goals and performance measures for determining whether its implementation is successful in supporting FDA’s mission. Each center also has its own strategic plan, but these plans differ in structure and content. While the center-specific plans include activities, goals, and objectives relevant to each individual center, they do not describe crosscutting issues or include plans for collaboration across the centers to address them. Officials from each center said that they also relied on performance measures in other documents, such as user fee commitments, to plan their activities and measure their performance. The growing importance of areas that cut across medical product centers highlights the importance of FDA’s strategic planning for medical product oversight. The absence of a documented long-term plan for medical product oversight may hinder FDA’s efforts to address emerging issues that require center collaboration, such as access to quality data and developing requirements for combination products. Also, the absence of a documented strategy is inconsistent with leading practices for strategic planning based on prior GAO work. These practices indicate that formal strategic planning for medical products is needed to identify crosscutting issues and to ensure that collaborative center goals, measures, and activities are effectively integrated with FDA’s overall organizational mission and goals. Documenting a strategic plan for medical products—whether it occurs in a freestanding document or as part of existing documents the centers are already using—would also enable FDA to oversee its activities in a consistent and transparent manner, help the agency communicate its priorities to key stakeholders, and help align its activities to support mission-related outcomes. 
In FDA’s SIMP, the agency compiled 30 efficiency initiatives under three different themes and included 19 workforce development initiatives across the centers covering training, recruitment, and retention. FDA had fully implemented about a third of the efficiency initiatives and most of the workforce development initiatives prior to the SIMP’s issuance in 2013. We found that FDA grouped the SIMP’s 30 efficiency initiatives into three themes: (1) business modernization, (2) process improvement, and (3) smarter regulation. (See appendix I for a full description of each efficiency initiative.) Under business modernization, FDA included 3 initiatives on each center’s workload measurement activities, 3 initiatives focused on data standards efforts, and 2 initiatives specific to staff location and ability to use electronic functions to complete their work. For the initiatives on the centers’ workload measurement activities, the centers each updated their time reporting systems to record user fee activities, which employees are required to do in 2-week increments four times during the fiscal year. Under process improvement, FDA included 11 efficiency initiatives specific to an agency-wide or center-specific need. CBER included initiatives to improve its review mechanisms and move to more electronic processes. CDER included efforts to streamline processes for its formal communication mechanisms with the industry and manufacturing facilities. CDRH included pilot programs for certain device types and manufacturers, and a postmarket program for identifying new device risks. Under smarter regulation, FDA included 11 initiatives—8 initiatives that stem from each user fee program, as well as 3 initiatives for medical devices that respond to other statutory requirements. The majority of the 11 initiatives are focused on the premarket review process of medical products. 
Specifically, the initiatives are related to improving communication between FDA and the industry, providing additional guidance to industry for how FDA will assess medical products, providing its plans for health information technology, and defining FDA’s approach to and requirements for facilities that manufacture drug products. The SIMP notes that these three themes reflect the strategic goals and priorities that the medical product centers are all pursuing to improve efficiency. FDA officials further explained that the three themes helped to connect seemingly unrelated center-specific and user fee program responsibilities and initiatives presented in the SIMP. We found that FDA fully implemented about a third of the 30 efficiency initiatives within the 12 to 18 months prior to the SIMP’s issuance in July 2013, and implemented another half of the initiatives since then. As of March 2016, the remaining initiatives had yet to be fully implemented, the majority of which are related to developing data standards for electronic submissions or efforts to move to an electronic review process. For example, CDRH specified that its initiative to establish a unique device identification system started with the highest risk medical devices and will be fully implemented in 2020 once all medical devices have identifiers in electronic health records. (See table 1.) We found that FDA included 19 workforce development initiatives in the SIMP—11 training initiatives, 7 recruitment initiatives, and 1 retention initiative. (See appendix II for a full description of each workforce development initiative.) FDA officials told us that the majority of the workforce development initiatives are specific to each center’s activities, reflecting differences in program responsibilities and procedures. Industry officials we spoke with emphasized the importance of recruitment, retention, and training efforts to the agency’s ability to meet user fee commitments. 
(For more information on the size and characteristics of FDA’s overall and center-specific workforce, see appendix III.) The 11 training initiatives FDA included in the SIMP describe multiple training courses or programs. As part of these initiatives, FDA included programs for the new reviewer trainings offered by each of the medical product centers and initiatives covering training for each of the user fee programs, which may be taken by staff from multiple centers. The initiatives also included training courses dedicated to specific topics for each medical product center. For example, CBER included training courses covering medical device review and project management, and CDRH included two leadership experience programs for future and current managers. The first CDRH program gives certain staff an opportunity to explore a supervisory career path; the second is to help staff in management positions learn about CDRH’s management competencies and satisfy federal supervisory training requirements. We found that the seven recruitment initiatives FDA included in the SIMP are intended to streamline recruitment processes at both the agency and center levels. For example, CDER included initiatives to manage and fill vacancies in executive-level positions and critical occupations, such as chemists and project managers. Each of the centers also included initiatives to improve outreach to potential job candidates, such as through job fairs, alumni networks, and institutional partnerships. For retention, we found that FDA included a single initiative in the SIMP— CDRH’s efforts to address the center’s high attrition rate by reducing individual workloads, decreasing staff-to-manager ratios, and providing employees with a better work environment. To reduce staff workloads and decrease staff-to-manager ratios, CDRH increased the number of review and management staff. 
To provide a better work environment, CDRH developed and improved performance evaluation tools and employee recognition processes. For example, CDRH created a resource guide to educate staff on the center’s performance management system. FDA did not include retention initiatives for CBER or CDER in the SIMP; however, officials from both centers told us that each center uses some retention tools and processes. Among the 19 workforce development initiatives included in the SIMP, 15 initiatives were implemented prior to the plan’s issuance in July 2013. By March 2016, FDA implemented 2 additional workforce development initiatives, bringing the total to 17 initiatives. Of the remaining 2 initiatives, 1 is still being implemented. CDRH is in the process of reducing staff workloads as part of the center’s retention initiative—an activity related to hiring plans that are to be phased in through fiscal year 2017. The final one, CDER’s alumni network initiative, was terminated. CDER planned to pilot the initiative in four of its offices beginning in 2013, but it was never piloted or implemented due to a lack of employee activity in alumni associations. (See table 2.) We found that FDA had already established or has plans to establish formal and informal mechanisms to assess the effectiveness of just over half of the 30 efficiency initiatives in the SIMP. For the SIMP’s workforce development initiatives, FDA identified mechanisms to assess most of the 19 initiatives, and each center’s approach to assessing training differs. FDA stated that the agency had assessed or has plans to assess just over half of the 30 efficiency initiatives for effectiveness, although these plans are generally not described in the SIMP. In its plan, FDA identified formal measures of effectiveness for 3 initiatives, each of which is based on a MDUFA or PDUFA commitment, but does not specify any additional measures in the plan itself for the remaining 27 initiatives. (See table 3.) 
However, we found that FDA has formal or informal measures that do not appear in the SIMP for a majority of these initiatives. For five initiatives, FDA officials identified formal measures of effectiveness that were not described in the SIMP. The officials explained that these initiatives are assessed through periodic user fee program reports or center strategic goals. For example, CDER officials told us that the GDUFA initiative on commitments, complete review, and easily correctable deficiencies is assessed against the user fee commitments. For example, FDA committed to review and act on 90 percent of complete, electronic abbreviated new drug applications within 10 months after the date of submission. FDA does not have to meet some of these commitments until 2017, but the agency indicated that it faces challenges meeting them due to a large backlog of applications. CDRH officials told us that they assessed the investigational device exemption decision program using center-specific strategic goals related to reducing the number of review cycles needed before full approval, and reducing the overall median time to full approval. CDRH met each of these goals in fiscal year 2015. For nine initiatives, officials from each center described efforts they took to informally examine effectiveness. For example, CBER uses staff feedback to assess implementation of its electronic review templates, and incorporates revisions as appropriate. For CDRH’s initiative to establish a unique device identification system, officials said they track certain metrics, such as numbers of vendors certified to participate in the program and visits to the program’s website. FDA officials told us that, for the remaining 13 efficiency initiatives in the SIMP, they are either exploring effectiveness measures or do not have plans to measure effectiveness. In some cases, officials described ways in which effectiveness could be measured or efforts to develop assessments. 
For example, CDRH officials told us that they did not currently have, but were exploring, ways to measure the impact of its signal management program initiative through industry responses or actions taken. In other instances, such as with CBER’s two initiatives on improving its managed review process tool, officials indicated that they were unclear about the best way to measure effectiveness. Additionally, FDA does not have current plans to measure effectiveness of some initiatives and officials noted that such measurement would be either unnecessary or impractical. For example, FDA is not measuring effectiveness for the PDUFA meeting minutes initiative, because officials said it would be a challenge to survey sponsors and the agency wants to be selective about choosing that option. FDA identified mechanisms to assess the effectiveness of 12 of the 19 workforce development initiatives. Specifically, the agency identified mechanisms to assess 4 of 7 recruitment initiatives, the 1 retention initiative, and 7 of 11 training initiatives. In the SIMP, FDA generally did not describe assessments for specific initiatives, but rather described each user fee program’s hiring and training commitments as broad measures of the agency’s workforce development efforts. For example, in order to reach the committed GDUFA level of 923 full-time equivalent staff by the end of fiscal year 2015, FDA committed to hire and then train at least 25 percent of staff in fiscal year 2013 and 50 percent in fiscal year 2014. FDA reported that it met this commitment by October 2014, 11 months ahead of schedule. (As previously noted, appendix III provides additional information on the size and characteristics of FDA’s overall and center-specific workforce.) FDA officials described the mechanisms in place to assess the effectiveness of 4 of the 7 recruitment initiatives described in the SIMP. 
For the two that are FDA-wide recruitment initiatives, FDA uses agency- and department-wide tools to measure the overall effectiveness. Specifically, FDA developed the FDA Accelerated Staffing Track 80-day hiring metric in early fiscal year 2015 to measure the time it takes to hire a new employee once the need is identified. However, officials said that data quality and data entry issues limited the accuracy and validity of the data available at the time of our review. In addition, FDA uses HHS personnel information systems to track monthly and quarterly hire and separation data for each medical product center. Officials also described performance metrics that CBER and CDRH track to assess effectiveness for two center-specific recruitment initiatives. For CBER's comprehensive recruitment strategy, the center tracks the number of resumes received and hires from targeted populations. For example, during fiscal year 2015 CBER hired four veteran candidates; five minority candidates; and 31 science, technology, engineering, and mathematics candidates. For CDRH's initiative on strategic communication and outreach for recruitment, the center uses monthly reports to track the number of applicants responding to the center's job postings, including data on the number of applicants that apply to and are eligible for each position. For the three recruitment initiatives that do not currently have mechanisms to assess effectiveness, CDER officials described the center's current plans or what it had already done. For one initiative, officials said that the center is developing an automated project management and tracking tool. Officials expect that the tool will be implemented in spring 2016. For another initiative, CDER met its overall hiring objectives, but did not measure the number of selections made as a result of the initiative itself. Finally, CDER's alumni network initiative was never implemented, and thus FDA did not put in place a mechanism to assess its effectiveness. 
To assess the effectiveness of the one retention initiative in the SIMP, CDRH officials told us they measure the number of full-time equivalent staff supporting MDUFA activities, changes in staff-to-manager ratios, and survey results. CDRH's total full-time equivalent staff supporting MDUFA increased from 1,133 in fiscal year 2013 to 1,293 in fiscal year 2015. At the same time, CDRH reduced the staff-to-manager ratio in its two offices with medical device review responsibilities. CDRH also analyzes changes in Federal Employee Viewpoint Survey responses to assess its efforts to provide a better workplace for its employees. From 2011 to 2014, CDRH observed positive changes for three of six critical indicators it identified for providing recognition and all six critical indicators it identified for performance evaluation. Of the 11 training initiatives, officials from CBER, CDER, and CDRH identified mechanisms to assess the effectiveness of 7. Specifically, each center indicated that it uses participant surveys to assess effectiveness. CBER also delivers a test at the conclusion of some, but not all, of the programs included in its training initiative. Furthermore, as described in the SIMP, CDRH conducts an audit process for its Reviewer Certification Program through which new reviewers are evaluated by an experienced reviewer. During the audit process, new reviewers are rated against six criteria, including the appropriate use of guidance and strength of final review decision analyses. The four remaining training initiatives described in the SIMP are related to each user fee program, and the centers use different approaches to assess the extent to which all reviewers required to complete training have done so. CBER and CDER track the names of staff who register for training and do not measure the number of medical product reviewers required to complete the trainings. 
CBER and CDER officials said that FDA was not required to report on training completion rates, and they assume that required staff completed user fee training, because it is made available in different settings, such as CBER's review management updates and CDER's new employee orientation. For example, CDER officials told us that 99 percent of the staff hired under the GDUFA commitment had completed training, as all hired staff take mandatory online training once hired. Training completion rates are not included in GDUFA performance reports. In contrast, CDRH measures user fee training completion rates among its required staff and reports on these rates in MDUFA quarterly performance reports, as required by its user fee commitments. CDRH reported a 99 percent staff completion rate among its review staff required to complete MDUFA training. Center officials did not identify any mechanisms to assess how effective participants were in applying the information learned during these user fee trainings (known as training comprehension). CBER officials said its user fee trainings were delivered and recorded in special training sessions, such as in monthly review management updates, and that these trainings do not have mechanisms to assess comprehension. CDER officials were unable to show that staff who took user fee trainings were given post-completion tests. CDRH officials told us that a post-completion test was not disseminated for the initial MDUFA trainings. However, CDRH has since incorporated the user fee trainings into the center's Reviewer Certification Program, which has multiple mechanisms for assessment. Emerging issues—including increasingly complex medical products such as combination products, the need for integrated information systems, and the increased hiring demands for specific scientific knowledge—go beyond the expertise of a single medical product center and highlight the growing importance of strategic planning across medical products. 
Advances involving new diagnostic tools, treatments, and cures require collaboration in order to be successful. However, FDA has faced longstanding challenges in carrying out the many responsibilities necessary for the oversight of medical products. While FDA engaged each of the medical product centers in the development of the SIMP, this narrowly focused plan is not used by the agency or centers. Moreover, the plan highlights gaps in the agency's management across FDA's medical product centers: it does not fully link its performance goals to its general goals and objectives, and it provides limited information on implementation time frames. While FDA has various other strategic planning documents for medical product oversight, these documents also do not set a long-term strategy for the centers, because they are focused on narrower issues or do not have details specific to center-level collaboration. Using leading practices identified as essential for strategic planning can help ensure the agency is prepared to address challenges requiring coordination across the centers in a consistent and transparent manner. Documenting measurable goals, objectives, and a long-term strategy for areas resulting from this planning—whether it is through a freestanding document or as part of existing documents—can help the agency ensure its priorities are communicated among key stakeholders, even in times of leadership turnover. To ensure that FDA can effectively coordinate and integrate its medical product centers' programs and emerging issues, we recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to engage in a strategic planning process to identify challenges that cut across the medical product centers and document how it will achieve measurable goals and objectives in these areas. We provided a draft of this report to HHS. The agency agreed with our recommendation and provided written comments, which are reprinted in appendix IV. 
In its written comments, HHS described the context surrounding the development of the SIMP and the progress FDA has made regarding its medical product review activities under its four user fee programs. It noted the importance of coordinating and integrating the activities that are common among FDA’s medical product centers. In agreeing with our recommendation, HHS indicated that FDA has already started a process to identify key crosscutting themes for the medical products centers, which it will then use to develop an overarching strategic planning framework to guide the work of these centers. We encourage FDA to use leading practices to ensure this framework has measurable goals and objectives. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Table 4 shows each efficiency initiative that the Food and Drug Administration (FDA) included in its strategic integrated management plan. FDA described 30 efficiency initiatives in its plan, including those specific to a medical product center or to a user fee program. FDA also grouped the initiatives into three themes: (1) business modernization, (2) process improvement, and (3) smarter regulation. Table 5 shows each workforce development initiative the Food and Drug Administration (FDA) included in its strategic integrated management plan. 
FDA described 19 workforce development initiatives in its plan specific to recruitment, retention, or training. We analyzed Food and Drug Administration (FDA) data on the agency's workforce population and attrition for fiscal years 2012 to 2015. Our analysis includes detail on the three medical product centers: the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), and the Center for Devices and Radiological Health (CDRH). FDA's total workforce grew from 16,716 employees in 2012 to 19,043 employees in 2015—a 14 percent increase. FDA measures year-to-year changes in its total workforce by subtracting the employee losses from the employee gains of permanent and non-permanent staff. Figure 2 shows the number of medical product center employees—permanent and non-permanent—for each fiscal year. Some losses and gains reported by the centers are due to employees who transferred within the agency, such as from one center to another. Tables 6, 7, and 8 show information on transfers within FDA for each medical product center in fiscal years 2012 to 2015. FDA also tracks the percentage of retirement-eligible staff. In fiscal year 2015, 12.4 percent of FDA's overall permanent workforce was retirement-eligible. In the same fiscal year, the retirement eligibility for each medical product center was 15.9 percent for CBER, 10.9 percent for CDER, and 11.8 percent for CDRH. Figure 3 shows the FDA-wide and center-specific attrition rates from fiscal year 2012 to 2015. FDA calculates attrition rates by dividing the number of voluntary personnel losses by the average number of employees for each fiscal year. Voluntary personnel losses include retirements, resignations, and employees who transfer externally to another federal agency or internally to a different center or office within FDA. The following tables show the number of employees, personnel gains, and attrition rates for FDA and each medical product center. 
The tables also include information on mission-critical occupations, which may vary by center. In addition to the contact named above, William Hadley, Assistant Director; George Bogart; Jennel Lockley; Drew Long; Matt Lowney; Dan Powers; and E. Jane Whipple made key contributions to this report.
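The attrition-rate calculation described above can be sketched in a few lines. The function name and the example attrition inputs below are illustrative, not actual FDA data; the workforce-growth figures are the FDA-reported totals.

```python
# Sketch of the attrition-rate calculation described above: voluntary
# personnel losses divided by the average number of employees for the
# fiscal year. The function name and example inputs are illustrative.

def attrition_rate(voluntary_losses, start_employees, end_employees):
    """Return the fiscal-year attrition rate as a percentage."""
    average_employees = (start_employees + end_employees) / 2
    return 100 * voluntary_losses / average_employees

# Illustrative figures: 900 voluntary losses against an average
# workforce of 18,000 yields a 5.0 percent attrition rate.
print(attrition_rate(900, 17_500, 18_500))  # 5.0

# FDA-reported totals: growth from 16,716 employees in fiscal year
# 2012 to 19,043 in fiscal year 2015 is about a 14 percent increase.
growth = 100 * (19_043 - 16_716) / 16_716
print(round(growth))  # 14
```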
FDA—an agency within the Department of Health and Human Services (HHS)—has faced challenges in carrying out its responsibilities to ensure the safety and efficacy of medical products sold in the United States. In 2012, Congress required FDA to develop a SIMP for the three centers overseeing medical products that identifies initiatives for improving efficiency, initiatives for workforce development, and measures for assessing the progress of these initiatives. FDA issued the SIMP in July 2013. GAO was asked to examine FDA's implementation of the SIMP. In this report, GAO (1) evaluates the extent to which the SIMP serves as a strategic planning document, (2) describes the types of plan initiatives, and (3) describes the mechanisms FDA has to evaluate the effectiveness of its plan initiatives. GAO analyzed FDA documents and spoke to FDA officials to assess the SIMP's development and use, along with the implementation status and evaluation mechanisms used for the SIMP's initiatives. GAO also assessed FDA's plan against leading practices for strategic planning. Finally, GAO analyzed FDA workforce data on hiring and attrition for fiscal years 2012 to 2015. The Food and Drug Administration (FDA) developed a strategic integrated management plan (SIMP) for its three centers that oversee medical products (biologics, drugs, and medical devices); however, GAO found that the plan does not incorporate leading practices for strategic planning or document a comprehensive strategy for the centers. FDA officials explained that circumstances at the time of the SIMP's development, including leadership gaps, limited FDA's ability to structure the plan into an effective strategic planning document. While officials said they use a variety of other key documents for strategic planning—such as agency-level and initiative-specific plans—these other plans also do not describe a long-term strategy for addressing key issues that cut across medical product centers. 
For example, these other FDA documents do not describe the agency's plans for collaboration between the centers that could benefit certain initiatives, improve their decision-making, and improve the quality of evidence and clarity of guidance. FDA officials acknowledged the growing need for strategic planning across the medical product centers to improve center collaboration and address emerging issues. The absence of a comprehensive long-term plan for medical product oversight may hinder FDA's efforts to address emerging issues that require center collaboration, such as access to quality data. Fully documenting such a strategy, either in a separate plan or through existing documents, would help the agency identify measurable goals and objectives for the centers that align with its mission and help communicate its priorities to key stakeholders. In the SIMP, FDA compiled mostly preexisting initiatives to improve the efficiency of each center's activities and develop its workforce. GAO found that for improving efficiency, FDA selected 30 initiatives that it grouped into three different themes—smarter regulation, process improvement, and business modernization. FDA had fully implemented a third of the initiatives prior to the SIMP's issuance in 2013; another half were implemented by March 2016. As of this date, the remaining initiatives had yet to be fully implemented. For workforce development, FDA included 19 recruitment, retention, and training initiatives, which generally reflected differences in center activities. FDA implemented 15 initiatives prior to the SIMP's issuance and 2 additional initiatives since then. Of the remaining initiatives, 1 was terminated and, as of March 2016, FDA was in the process of implementing the other initiative. Although not generally reported in the SIMP, FDA officials identified mechanisms to assess the effectiveness of the majority of the initiatives included in the plan. 
Of the 30 efficiency initiatives, FDA officials identified 8 that have formal evaluations (such as third-party assessments) and 9 that are assessed informally (such as by gathering feedback). For the remaining 13, officials said they are either exploring effectiveness measures or have no plans to assess them because they consider it to be unnecessary or impractical. FDA identified mechanisms to assess 12 of the 19 workforce development initiatives, including through recruitment performance metrics and surveys of training participants. For 4 initiatives, the centers each use different approaches to assess training. For the remaining 3 initiatives, FDA either is developing a mechanism or described past assessment activities. GAO recommended that the Secretary of Health and Human Services direct FDA to engage in a strategic planning process to identify challenges that cut across the medical product centers, and document how it will achieve measurable goals and objectives in these areas. HHS agreed with the recommendation.
For the past several years, concerns about the cost of operating and maintaining federal recreation sites within the federal land management agencies have led the Congress to provide a significant new source of funds. This additional source of funding—the Recreational Fee Demonstration Program—was authorized in 1996. The fee demonstration program authorized the Bureau of Land Management, Fish and Wildlife Service, National Park Service, and the Forest Service to experiment with new ways to administer existing fee revenues and to establish new recreation entrance and user fees. The current authorization for the program expires December 31, 2005. Previously, all sites collecting entrance and user fees deposited the revenue into a special U.S. Treasury account to be used for certain purposes, including resource protection and maintenance activities, and funds in this account only became available through congressional appropriations. The fee demonstration program currently allows agencies to maintain fee revenues in special U.S. Treasury accounts for use without further appropriation: 80 percent of the fees are maintained in an account for use at the site and the remaining 20 percent are maintained in another account for use on an agency-wide basis. As a result, these revenues have yielded substantial benefits for local recreation sites by funding significant on-the-ground improvements. From the inception of the Recreational Fee Demonstration Program, the four participating agencies have collected over $1 billion in recreation fees from the public. The Department of the Interior and the Department of Agriculture’s most recent budget requests indicate that the agencies expect to collect $138 million and $46 million, respectively, from the fee demonstration program in fiscal year 2005. H.R. 
3283, as proposed, would provide a permanent source of revenue for federal land management agencies to use to, among other things, help address the backlog in repair and maintenance of federal facilities and infrastructure. One of the principal uses of the revenues generated under the existing Recreational Fee Demonstration Program is for participating agencies to reduce their respective maintenance backlogs. The Department of the Interior owns, builds, purchases, and contracts services for such assets as visitor centers, roads, bridges, dams, and reservoirs, many of which are deteriorating and in need of repair or maintenance. We have identified Interior's land management agencies' inability to reduce their maintenance backlogs as a major management challenge. According to the Department of the Interior's latest estimates, the deferred maintenance backlog for its participating agencies ranged from about $5.1 billion to $8.3 billion. Table 1 shows the Department's estimate of deferred maintenance for its agencies participating in the Recreational Fee Demonstration Program. Of the current participating agencies within Interior, the National Park Service has the largest estimated maintenance backlog—ranging from $4 billion to nearly $7 billion. As we have previously reported, the Park Service's problems with maintaining its facilities have steadily worsened in part because the agency lacks accurate data on the facilities that need to be maintained or on their condition. As a result, the Park Service cannot effectively determine its maintenance needs, the amount of funding needed to address them, or what progress, if any, it has made in closing the maintenance gap. Although the Park Service has used some of the revenues generated from the fee demonstration program to address its high-priority maintenance needs, without accurate and reliable data, it cannot demonstrate the effect of fee demonstration revenues in improving the maintenance of its facilities. 
The Park Service has acknowledged the problems associated with not having an accurate and reliable estimate of its maintenance needs and promised to develop an asset management process that, when operable, should provide a systematic method for documenting deferred maintenance needs and tracking progress in reducing the amount of deferred maintenance. Furthermore, the new process should enable the agency to develop (1) a reliable inventory of its assets, (2) a process for reporting on the condition of each asset, and (3) a system-wide methodology for estimating its deferred maintenance costs. In 2002, we identified some areas that the agency needed to address in order to improve the performance of the process, including the need to develop costs and schedules for completing the implementation of the process, better coordination of the tracking of the process among Park Service headquarters units to avoid duplication of effort within the agency, and better definition of its approach to determining the condition of its assets and how much the assessments will cost. In our last testimony on this issue before this Subcommittee in September 2003, we stated that the complete implementation of the new process would not occur until fiscal year 2006, but that the agency had completed, or nearly completed, a number of substantial and important steps to improve the process. The two other Interior agencies participating in the program, the Fish and Wildlife Service and the Bureau of Land Management, also report deferred maintenance backlogs of about $1 billion and $330,000, respectively. We do not have any information at this time on the effectiveness of the program in reducing these backlogs. The Forest Service also has an estimated $8 billion maintenance backlog, most of which is needed to maintain forest roads and bridges. 
In September 2003, we reported that the Forest Service (like the Park Service) had no effective means for measuring how much of the fee demonstration revenues it had spent on deferred maintenance or the impact that the fee program had had on reducing its deferred maintenance needs. Although the Forest Service has recognized the significance of its deferred maintenance problem, it does not have a systematic method for compiling the information needed to provide a reliable estimate of its deferred maintenance needs. Furthermore, the agency has not developed a process to track deferred maintenance expenditures from fee demonstration revenues. As a result, even if the agency knew how much fee revenue it spent on deferred maintenance, it could not determine the extent to which these revenues had reduced its overall deferred maintenance needs. Forest Service officials provided several reasons why the agency had not developed a process to track deferred maintenance expenditures from the demonstration revenues. First, they said that the agency chose to use its fee demonstration revenue to improve and enhance on-site visitor services rather than to develop and implement a system for tracking deferred maintenance spending. Second, the agency was not required to measure the impact of fee revenues on deferred maintenance. Finally, because the fee demonstration program was temporary, agency officials had concerns about developing a process for tracking deferred maintenance, not knowing if the program would subsequently be made permanent. H.R. 3283 would provide participating agencies with a permanent source of funds to supplement existing appropriations and to better address maintenance backlogs. Furthermore, by making the program permanent, H.R. 3283 could provide participating agencies like the Forest Service with an incentive to develop a system to track their deferred maintenance backlogs. 
The existing fee demonstration program requires federal land management agencies to maintain at least 80 percent of the fee revenues for use on-site. In a 1998 report, we suggested that, in order to provide greater opportunities to address high priority needs of the agencies, the Congress consider modifying the current requirement to grant agencies greater flexibility in using fee revenues. H.R. 3283 provides the agencies with flexibility to reduce the percentage of revenues spent on-site down to 60 percent. We also reported that the requirement that at least 80 percent of the revenues be maintained for use at the collection site may inadvertently create funding imbalances between sites and that some heavily visited sites may reach a point where they have more revenues than they need for their projects, while other sites would still fall short. In 1999, we testified that some demonstration sites were generating so much revenue as to raise questions about their long-term ability to spend these revenues on high-priority items. In contrast, we warned that sites outside the demonstration program, as well as demonstration sites that did not collect as much in fee revenues, may have high-priority needs that remained unmet. As a result, some of the agencies’ highest-priority needs might not be addressed. Our testimony indicated that, at many sites in the demonstration program, the increased fee revenues amounted to 20 percent or more of the sites’ annual operating budgets, allowing such sites to address past unmet needs in maintenance, resource protection, and visitor services. While these sites could address their needs within a few years, the 80-percent requirement could, over time, preclude the agencies from redistributing fee revenues to meet more pressing needs at other sites. Our November 2001 report confirmed that such imbalances had begun to occur. 
Officials from the land management agencies acknowledged that some heavily visited sites with large fee revenues may eventually collect more revenue than they need to address their priorities, while other lower-revenue generating sites may have limited or no fee revenues to meet their needs. To address this imbalance, we suggested that the Congress consider modifying the current requirement that 80 percent of fee revenue be maintained for use by the sites generating the revenues to allow for greater flexibility in using fee revenues. H.R. 3283 would still generally require agencies to maintain at least 80 percent of fee revenues for use on-site. However, if the Secretary of the Interior determined that the revenues collected at a site exceeded the reasonable needs of the unit for which expenditures may be made for that fiscal year, under H.R. 3283 the Secretary could then reduce the percentage of on-site expenditures to 60 percent and transfer the remainder to meet other priority needs across the agency. The need for flexibility in transferring revenue must also be balanced against the necessity of keeping sufficient funds on-site to maintain incentives at fee-collecting units and to maintain the support of the visitors. Such a balance is of particular concern to the Forest Service, which has identified that visitors generally support the program so long as the fees are used on-site and they can see improvements to the site where they pay fees. Accordingly, under the existing fee demonstration program, the Forest Service has committed to retaining 90 to 100 percent of the fees on-site. As such, H.R. 3283 would not likely change the Forest Service’s use of collected fees. However, it would provide the Forest Service, as well as the other agencies, with the flexibility to balance the need to provide incentives at fee collecting sites and support of visitors against transferring revenues to other sites. 
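As a rough sketch of the allocation rule discussed above, the split could be modeled as follows. The function name, the `exceeds_needs` flag, and the whole-dollar rounding are our own illustrative assumptions, not language from the bill.

```python
# Minimal sketch of the fee-revenue split discussed above: by default at
# least 80 percent of collections stays at the collecting site; under
# H.R. 3283 the Secretary could reduce the on-site share to 60 percent
# when revenues exceed a site's reasonable needs. Names and inputs here
# are illustrative assumptions, not statutory text.

def split_fee_revenue(revenue, exceeds_needs=False):
    """Return (on_site, agency_wide) whole-dollar shares of fee revenue."""
    on_site_percent = 60 if exceeds_needs else 80
    on_site = revenue * on_site_percent // 100
    agency_wide = revenue - on_site
    return on_site, agency_wide

# A site collecting $1 million whose revenues exceed its reasonable
# needs would keep $600,000, freeing $400,000 for agency-wide use.
print(split_fee_revenue(1_000_000, exceeds_needs=True))  # (600000, 400000)
```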
The legislative history of the fee demonstration program places an emphasis on participating agency collaboration to minimize or eliminate confusion for visitors where multiple fees could be charged to visit recreation sites in the same area. Our prior work has pointed to the need for more effective coordination and cooperation among the agencies to better serve visitors by making the payment of fees more convenient and equitable while, at the same time, reducing visitor confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites. For example, sites do not consistently accept agency and interagency passes, resulting in visitor confusion and, in some cases, overlapping or duplicative fees for the same or similar activities. H.R. 3283 would allow for improved service to visitors by coordinating federal agency fee-collection activities. First, the bill would standardize the types of fees that the federal land management agencies use. Second, it would create a single national pass that would provide visitors access to recreation sites managed by different agencies. Third, it would allow for the coordination of fees on a regional level for access to multiple nearby sites. In November 2001, we reported that agencies had not pursued opportunities to coordinate their fees better among their own sites, with other agencies, or with other nearby, nonfederal recreational sites. As a result, visitors often had to pay fees that were sometimes overlapping, duplicative, or confusing. Limited fee coordination by the four agencies has permitted confusing fee situations to persist. At some sites, an entrance fee may be charged for one activity whereas a user fee may be charged for essentially the same activity at a nearby site. 
For example, visitors who entered either Olympic National Park or the Olympic National Forest in Washington State for day hiking were engaged in the same recreational activity—obtaining general access to federal lands—but were charged distinct entrance and user fees. For a 1-day hike in Olympic National Park, users paid a $10 per-vehicle entry fee (good for 1 week), whereas hikers using trailheads in Olympic National Forest were charged a daily user fee of $5 per vehicle for trailhead parking. Also, holders of the interagency Golden Eagle Passport—a $65 nationwide pass that provides access to all federal recreation sites that charge entrance fees—could use the pass to enter Olympic National Park but had to pay the Forest Service’s trailhead parking fee because the pass covers only entrance fees, not user fees. However, the two agencies now allow holders of the Golden Eagle Passport to use it for trailhead parking at Olympic National Forest. Similarly, confusing and inconsistent fee situations also occur at similar types of sites within the same agency. For example, visitors to some Park Service national historic sites, such as the San Juan National Historic Site in Puerto Rico, pay a user fee and have access to all amenities at the sites, such as historic buildings. However, other Park Service historic sites, such as the Roosevelt/Vanderbilt Complex in New York State, charge no user fees, but tours of the primary residences require the payment of entrance fees. Visitors holding an annual pass that covers entrance fees, such as the National Parks Pass, may be confused to find that the pass is sufficient for admission to a user-fee site, such as the San Juan National Historic Site, but not sufficient to allow them to enter certain buildings at the Roosevelt/Vanderbilt Complex, which charge entrance fees. H.R. 
3283 would streamline the recreational fee program by providing a standard, 3-tiered fee structure across federal land management agencies: a basic recreation fee, an expanded recreation fee, and a special recreation permit fee. H.R. 3283 establishes several areas where a basic recreation fee may be charged. For example, the basic recreation fee offers access to, among other areas, National Park System units, National Conservation Areas, and National Recreation Areas. Expanded recreation fees are charged either in addition to the basic recreation fee or by themselves when the visitor uses additional facilities or services, such as a developed campground or an equipment rental. A special recreation permit fee is charged when the visitor participates in an activity such as a commercial tour, competitive event, or an outfitting or guiding activity. In November 2001, we reported another example of an interagency issue that needed to be addressed—the inconsistency and confusion surrounding the acceptance and use of the $65 Golden Eagle Passport. The annual pass provides visitors with unlimited access to federal recreation sites that charge an entrance fee. However, many sites do not charge entrance fees to gain access to a site and instead charge a user fee. For example, Yellowstone National Park, Acadia National Park, and the Eisenhower National Historic Site charge entrance fees, but sites like Wind Cave National Park charge user fees for general access. If user fees are charged in lieu of entrance fees, the Golden Eagle Passport is generally not accepted even though, to the visitor with a Golden Eagle Passport, there is no practical difference. Further exacerbating the public’s confusion over payment of user or entrance fees was the implementation of the Park Service’s single-agency National Parks Pass in April 2000. 
This $50 pass admits the holder, spouse, children, and parents to all National Park Service sites that charge an entrance fee for a full year. However, the Parks Pass does not admit the cardholder to the Park Service sites that charge a user fee, nor is it accepted for admittance to other sites in the Forest Service and in the Department of the Interior, including the Fish and Wildlife Service sites. H.R. 3283 would eliminate the current national passes and replace them with one federal lands pass—called the “America the Beautiful—the National Parks and Federal Recreation Lands Pass”—for use at any site of a federal land management agency that charges a basic recreation fee. The act also calls for the Secretaries of Agriculture and the Interior to jointly establish the National Parks and Federal Recreation Lands Pass and to jointly issue guidelines on the administration of the pass. In addition, it requires that the Secretaries develop guidelines for establishing or changing fees and that these guidelines, among other things, would require federal land management agencies to coordinate with each other to the extent practicable when establishing or changing fees. H.R. 3283 would also provide local site managers the opportunity to coordinate and develop regional passes to reduce visitor confusion over access to adjacent sites managed by different agencies. When authorizing the demonstration program, the Congress called upon the agencies to coordinate multiple or overlapping fees. We reported in 1999 that the agencies were not taking advantage of this flexibility. For example, the Park Service and the Fish and Wildlife Service manage sites that share a common border on the same island in Maryland and Virginia—Assateague Island National Seashore and Chincoteague National Wildlife Refuge. When the agencies selected the two sites for the demonstration program, they decided to charge separate entrance fees. 
However, as we reported in 2001, the managers at these sites developed a reciprocal fee arrangement whereby each site accepted the fee paid at the other site to better accommodate visitors. Resolving situations in which inconsistent and overlapping fees are charged for similar recreation activities would offer visitors a rational and consistent fee program. We stated that further coordination among the agencies participating in the fee demonstration program could reduce confusion for visitors. We reported that demonstration sites may be reluctant to coordinate on fees partly because the program’s incentives are geared toward increasing their revenues. Because joint fee arrangements may reduce revenues to specific sites, these sites may have a disincentive to coordinate. Nonetheless, we believe that the increase in service to the public might be worth a small reduction in revenues. Accordingly, we recommended that the Secretaries of Agriculture and the Interior direct the heads of the participating agencies to improve their service to visitors by better coordinating their fee-collection activities under the Recreational Fee Demonstration Program. In response, in 2002, the Departments of the Interior and Agriculture formed the Interagency Recreational Fee Leadership Council to facilitate coordination and consistency among the agencies on recreation fee policies. We also recommended that the agencies approach such an analysis systematically, first by identifying federal recreation areas close to each other and then, for each situation, determining whether a coordinated approach, such as a reciprocal fee arrangement, would better serve the visiting public. The agencies implemented this recommendation to a limited extent, as evidenced by the reciprocal fee arrangement between Assateague Island National Seashore and Chincoteague National Wildlife Refuge. H.R. 
3283 offers federal agencies the opportunity to develop regional passes that would offer access to sites managed by different federal, state, and local agencies. As we have reported in the past, if all four agencies are to improve interagency communication, coordination, and consistency and make the program user-friendly, an effective mechanism is needed to ensure that interagency coordination occurs and to resolve interagency issues or disputes when they arise. Essentially, the fee demonstration program raises revenue for the participating sites to use for maintaining and improving the quality of visitor services and protecting the resources at federal recreation sites. The program has been successful in raising a significant amount of revenue. However, the agencies could further enhance the quality of visitor services by providing better overall management of the program. Several of the provisions in H.R. 3283 address many of the quality-of-service issues we have identified through our prior work, and if the provisions are properly implemented, these services should improve. While the fee demonstration program provides funds to increase the quality of the visitor experience and enhance the protection of resources by, among other things, addressing a backlog of repair and maintenance needs, the program’s short- and long-term success lies in the flexibility it gives agencies to spend revenues and in removing any undesirable inequities, so that the agencies’ highest-priority needs are met. However, any changes to the program’s requirements should be balanced so that fee-collecting sites continue to have an incentive to collect fees and the visitors who pay them continue to support the program. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. 
For further information about this testimony, please contact me at (202) 512-3841. Doreen Feldman, Roy Judy, Jonathan McMurray, Patrick Sigl, Paul Staley, Amy Webbink, and Arvin Wu made key contributions to this statement. The following is a listing of related GAO products on recreation fees, deferred maintenance, and other related issues. Recreation Fees: Information on Forest Service Management of Revenue from the Fee Demonstration Program. GAO-03-1161T. Washington, D.C.: September 17, 2003. Recreation Fees: Information on Forest Service Management of Revenue from the Fee Demonstration Program. GAO-03-470. Washington, D.C.: April 25, 2003. Recreation Fees: Management Improvements Can Help the Demonstration Program Enhance Visitor Services. GAO-02-10. Washington, D.C.: November 26, 2001. Recreational Fee Demonstration Program Survey. GAO-02-88SP. Washington, D.C.: November 1, 2001. National Park Service: Recreational Fee Demonstration Program Spending Priorities. GAO/RCED-00-37R. Washington, D.C.: November 18, 1999. Recreation Fees: Demonstration Has Increased Revenues, but Impact on Park Service Backlog Is Uncertain. GAO/T-RCED-99-101. Washington, D.C.: March 3, 1999. Recreation Fees: Demonstration Program Successful in Raising Revenues but Could Be Improved. GAO/T-RCED-99-77. Washington, D.C.: February 4, 1999. Recreation Fees: Demonstration Fee Program Successful in Raising Revenues but Could Be Improved. GAO/RCED-99-7. Washington, D.C.: November 20, 1998. National Park Service: Efforts Underway to Address Its Maintenance Backlog. GAO-03-1177T. Washington, D.C.: September 27, 2003. National Park Service: Status of Agency Efforts to Address Its Maintenance Backlog. GAO-03-992T. Washington, D.C.: July 8, 2003. National Park Service: Status of Efforts to Develop Better Deferred Maintenance Data. GAO-02-568R. Washington, D.C.: April 12, 2002. National Park Service: Efforts to Identify and Manage the Maintenance Backlog. GAO/RCED-98-143. 
Washington, D.C.: May 14, 1998. National Park Service: Maintenance Backlog Issues. GAO/T-RCED-98-61. Washington, D.C.: February 4, 1998. Deferred Maintenance Reporting: Challenges to Implementation. GAO/AIMD-98-42. Washington, D.C.: January 30, 1998. Major Management Challenges and Program Risks, Department of the Interior. GAO-03-104. Washington, D.C.: January 2003. Major Management Challenges and Program Risks, Department of the Interior. GAO-01-249. Washington, D.C.: January 2001. Park Service: Managing for Results Could Strengthen Accountability. GAO/RCED-97-125. Washington, D.C.: April 10, 1997. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1996, the Congress authorized an experimental initiative called the Recreational Fee Demonstration Program that provides funds to increase the quality of visitor experience and enhance resource protection. Under the program, the Bureau of Land Management, Fish and Wildlife Service, and National Park Service--all within the Department of the Interior--and the Forest Service--within the U.S. Department of Agriculture--are authorized to establish, charge, collect, and use fees at a number of sites to, among other things, address a backlog of repair and maintenance needs. Also, sites may retain and use the fees they collect. The Congress is now considering, through H.R. 3283, whether to make the program permanent. Central to the debate is how effectively the agencies are using the revenues that they have collected. This testimony focuses on the potential effect of H.R. 3283 on the issues GAO raised previously in its work on the Recreational Fee Demonstration Program. Specifically, it examines the extent to which H.R. 3283 would affect (1) federal agencies' deferred maintenance programs, (2) the management and distribution of the revenue collected, and (3) interagency coordination on fee collection and use. H.R. 3283 would provide agencies with a permanent source of funds to better address their maintenance backlog, and by making the program permanent, the act would provide agencies incentive to develop a system to track their deferred maintenance backlogs. According to the Department of the Interior's latest estimates, the deferred maintenance backlog for the Interior agencies participating in the fee demonstration program ranges from $5.1 billion to $8.3 billion, with the Park Service alone accounting for an estimated $4 to $7 billion. Likewise, the Forest Service, the other participating agency, estimates its total deferred maintenance backlog to be about $8 billion. 
GAO's prior work on the Park Service's and Forest Service's backlogs has demonstrated that neither agency has accurate and reliable information on its deferred maintenance needs, and that neither can determine how much of its fee demonstration revenues it spends on reducing those needs. Furthermore, some agency officials have hesitated to divert resources to develop a process for tracking deferred maintenance because the fee demonstration program is temporary. H.R. 3283 would allow agencies to reduce the percentage of fee revenue used on-site to as low as 60 percent, thus providing the agencies with greater flexibility in how they use the revenues. Currently, the demonstration program requires federal land management agencies to maintain at least 80 percent of the collected fee revenues for use on-site. This requirement has helped some demonstration sites generate revenue in excess of their high-priority needs, while the high-priority needs at other sites, which did not collect as much in fee revenues, remained unmet. GAO has suggested that the Congress consider modifying the current 80-percent on-site spending requirement to provide agencies greater flexibility in using fee revenues. H.R. 3283 would standardize the types of fees federal land management agencies may use, create a single national pass that provides visitors general access to a variety of recreation sites managed by different agencies, and allow for the regional coordination of fees for access to multiple nearby sites. GAO's prior reports have demonstrated the need for more effective coordination and cooperation among the agencies to better serve visitors by making the payment of fees more convenient and equitable while reducing visitor confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites.
DOE has a vast complex of sites across the nation dedicated to the nuclear weapons program. DOE largely ceased production of plutonium and enriched uranium by 1992, but the waste remains at the sites. Most of the tanks in which the waste is stored have already exceeded their design life. For example, many of Hanford’s and Savannah River’s tanks were built between the 1940s and 1960s and were designed to last 10 to 40 years. Leaks from some of these tanks were first detected at Hanford in 1956 and at Savannah River in 1959. Given the age and deteriorating condition of some of the tanks, there is concern that some of them will leak additional waste into the soil, where it may migrate to the water table and, in the case of the Hanford Site, to the Columbia River. Responsibility for the high-level waste produced at DOE facilities is governed primarily by federal laws, including the Atomic Energy Act of 1954. These laws established responsibility for the regulatory control of radioactive materials, including DOE’s high-level waste, and assigned the Nuclear Regulatory Commission (NRC) the function of licensing facilities that are expressly authorized for long-term storage of high-level radioactive waste generated by DOE. In addition, the Nuclear Waste Policy Act of 1982 defined high-level radioactive waste. Various other federal laws, including the Resource Conservation and Recovery Act of 1976, guide how DOE must carry out its cleanup program. The high-level waste cleanup program is under the leadership of the Assistant Secretary for Environmental Management. It involves consultation with a variety of stakeholders, including the Environmental Protection Agency, state environmental agencies where DOE sites are located, county and local governmental agencies, citizen groups, advisory groups, and Native American tribes. 
The waste in the tanks at the Hanford and Savannah River sites and the Idaho National Laboratory near Idaho Falls is a complex mixture of radioactive and hazardous components. DOE’s process for preparing it for disposal is designed to separate much of the radioactive material from other waste components. Nearly all the radioactivity in the waste originates from radionuclides with half-lives of about 30 years or less. The relatively short half-lives of most of the radionuclides in the waste mean that within 30 years, about 50 percent of the current radioactivity will have decayed away, and within 100 years this figure will rise to more than 90 percent. Figure 1 shows the pattern of decay, using 2002 to 2102 as the 100-year period. Extending the analysis beyond the 100-year period shown in the figure, in 300 years, 99.8 percent of the radioactivity will have decayed, leaving 0.2 percent of the current radioactivity remaining. Despite the relatively rapid decay of most of the current radioactivity, some radionuclides have half-lives in the hundreds of thousands of years and will remain dangerously radioactive for millions of years. Some of these long-lived radionuclides are potentially very mobile in the environment and therefore must remain permanently isolated. If these highly mobile radionuclides leak out or are released into the environment, they can contaminate the soil and water. DOE plans to isolate the radioactive components and prepare the waste for disposal through a multi-step treatment process. DOE expects this process to concentrate at least 90 percent of the radioactivity into a much smaller volume that can be permanently isolated for at least 10,000 years in a geologic repository. The portion of the waste not sent to the geologic repository will have relatively small amounts of radioactivity and long-lived radionuclides. 
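The decay percentages above can be checked with a simple exponential-decay calculation. The sketch below is illustrative only: it assumes the entire inventory decays with a single 30-year half-life, whereas the report's figure of 99.8 percent decayed after 300 years (rather than the idealized 99.9 percent) reflects the actual mix of radionuclides in the waste.

```python
def fraction_remaining(years, half_life=30.0):
    """Fraction of the initial radioactivity remaining after `years`,
    assuming a single idealized 30-year half-life."""
    return 0.5 ** (years / half_life)

for t in (30, 100, 300):
    decayed = 1.0 - fraction_remaining(t)
    # 30 years -> 50.0% decayed; 100 years -> just over 90%;
    # 300 years -> about 99.9% under this single-half-life idealization
    print(f"After {t} years: {decayed:.1%} decayed")
```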
Based on current disposal standards used by the NRC, if the radioactivity of this remaining waste is sufficiently low, it can be disposed of on site near the surface of the ground, using less complex and expensive techniques than those required for the highly radioactive portion. DOE plans to dispose of this waste on site in vaults or canisters, or at other designated disposal facilities. DOE has successfully applied this process in a demonstration project at the West Valley site in New York State. At West Valley, separation of the low-activity portion from the high-level portion of the waste reduced by 90 percent the quantity of waste requiring permanent isolation and disposal at a geologic repository. The high-level portion was stabilized in a glass material (vitrified) and remains stored at the site pending completion of the high-level waste geologic repository and resolution of other issues associated with disposal costs. The remaining low-activity portion was mixed with cement-forming materials, poured into drums where it solidified into grout (a cement-like material), and remains stored on site, awaiting shipment to an off-site disposal facility. DOE’s new initiative, implemented in 2002, attempts to address the schedule delays and increasing costs DOE has encountered in its efforts to treat and dispose of high-level waste. This initiative is still evolving. As of April 2003, DOE had identified several strategies to help reduce the time needed to treat and dispose of the waste. Based on these strategies, DOE estimated that it could reduce the waste cleanup schedule by about 20 to 35 years at its high-level waste sites and save about $29 billion compared to the existing program baseline. While some degree of savings is likely if the strategies are successfully implemented, the extent of the savings is still uncertain. 
Many of DOE’s proposals to speed cleanup and reduce environmental risk involve ways to do one or more of the following: Deal with some tank waste as low-level or transuranic waste, rather than as high-level waste. Doing so would eliminate the need to vitrify the waste for off-site disposal in the geologic repository for high-level waste. Complete the waste treatment more quickly by using additional or supplemental technologies. For example, DOE’s Hanford Site is considering using up to four supplemental technologies, in addition to vitrification, to process its low-activity waste. DOE believes these technologies are needed to help it meet a schedule milestone date of 2028 agreed to with regulators to complete waste processing. Without these technologies, DOE believes waste treatment would not be completed before 2048. Segregate the waste more fully than initially planned and tailor waste treatment to each of the waste types. By doing so, DOE plans to apply less costly treatment methods to waste with lower concentrations of radioactivity. Close waste storage tanks earlier than expected, thereby avoiding the operating costs involved in maintaining the tanks and monitoring the wastes. Table 1 summarizes the estimated cost savings for each DOE site if accelerated proposals for cleaning up high-level waste are successfully implemented. Our review indicates that DOE’s current estimate of $29 billion may not yet be reliable and that the actual amount to be saved if DOE successfully implements the alternative waste treatment and disposal strategies may be substantially different from what DOE is projecting. We have several concerns about the reliability and completeness of the estimate. These concerns include the accuracy of baseline cost estimates from which savings are calculated, whether all appropriate costs are included in the analysis, and whether the savings estimates properly reflect the timing of the savings or uncertainties. 
DOE’s current lifecycle cost baseline is used as the base cost from which potential savings associated with any improvements are measured. However, in recent years, we and others have raised concerns about the reliability of DOE’s baseline cost estimates. In a 1999 report, we noted that DOE lacked a standard methodology for sites to use in developing their lifecycle cost baseline, raising a concern about the reliability of data used to develop these cost estimates. DOE’s Office of Inspector General also raised a concern in a 1999 review of DOE project estimates, noting that several project cost estimates examined were not supported or complete. DOE acknowledged in its February 2002 review of the cleanup program that baseline cost estimates do not provide a reliable picture of project costs. Some of DOE’s savings may be based on incomplete estimates of the costs for the accelerated proposals. According to Office of Management and Budget (OMB) guidance on developing cost estimates, agencies should ensure that all appropriate costs are addressed in the estimate. However, DOE has not always done so. For example, the Idaho National Laboratory’s estimated savings of up to $7 billion is based, in large part, on eliminating the need to build a vitrification facility to treat its waste. However, the waste may have to undergo an alternative treatment method before it can be accepted at a geological repository, and the Idaho National Laboratory is considering four different technologies for doing so. Nevertheless, DOE’s current savings estimate reflects the potential cost of only one of those technologies. DOE has not yet developed the costs of using any of the other waste treatment approaches. DOE noted that the accelerated lifecycle estimate could likely change depending on which one of the technologies is selected and the associated costs of treating the waste are developed. 
According to OMB guidance, agencies should ensure that the timing of when the savings will occur is accounted for, that uncertainties are recognized and quantified where possible, and that nonbudgetary impacts, such as a change in the level of risk to workers, are quantified, or at least described. We found problems in all three areas. Regarding the time value of money, applying OMB guidance would mean that estimates of savings in DOE’s accelerated plans should reflect a comparison of its baseline cost estimate with the alternative, expressed in a “present value,” where the dollars are discounted to a common year to reflect the time value of money. Instead, DOE’s savings estimates generally measure savings by comparing dollars in different years. For example, the Savannah River Site estimates a savings of nearly $5.4 billion by reducing by 8 years (from 2027 to 2019) the time required to process its high-level waste. Adjusting the savings estimate to present value in 2003 results in a savings of $2.8 billion in 2003 dollars. Regarding uncertainties, in contrast to OMB guidance, the DOE savings estimates generally do not consider uncertainties. For example, the savings projected in the Idaho National Laboratory’s accelerated plan reflect the proposal to no longer build the vitrification facility and an associated reduction in operations costs. However, the savings do not account for uncertainties such as whether alternatives to vitrification will succeed and at what cost. Rather than reflecting uncertainties by providing a range of savings, DOE’s savings estimate is a single point estimate of $7 billion. Regarding nonbudgetary impacts, DOE’s savings estimates generally do not fully assess the value of potential nonbudgetary impacts, such as a change in the level of risk to workers or potential effects on the environment. 
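The present-value adjustment OMB recommends can be illustrated with a short calculation. The numbers below are hypothetical, not GAO's actual computation for the Savannah River estimate; the 7 percent discount rate and the timing of the savings stream are assumptions made only for illustration.

```python
def present_value(cash_flows, rate, base_year):
    """Discount a {year: dollars} stream of savings to base_year dollars."""
    return sum(amount / (1.0 + rate) ** (year - base_year)
               for year, amount in cash_flows.items())

# Hypothetical stream: $1 billion of savings in each year 2020-2024,
# discounted to 2003 at an assumed 7 percent real rate.
savings = {year: 1.0e9 for year in range(2020, 2025)}
pv = present_value(savings, 0.07, 2003)
# The $5 billion nominal total shrinks to roughly $1.4 billion in
# 2003 dollars, which is why comparing undiscounted dollars from
# different years overstates the savings.
```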
OMB guidelines recommend identification and, where possible, quantification of other expected benefits and costs to society when evaluating alternative plans. For example, the Idaho National Laboratory’s accelerated plan does not assess potential increases in environmental risk, if any, from disposing of the waste without stabilizing it into a vitrified form. By not assessing these benefits and risks to workers and the environment, DOE leaves unclear how important these risks and trade-offs are to choosing an alternative treatment approach. DOE faces significant legal and technical challenges in achieving the cost and schedule reductions proposed in its new initiative. On the legal side, DOE’s proposals depend heavily on the agency’s authority to apply a designation other than “high-level waste” to the low-activity portion of the waste stream, so that this low-activity portion does not have to be disposed of more expensively as high-level waste. The portion of DOE’s order setting out criteria for making such determinations has been invalidated in a recent court ruling. On the technical side, DOE’s proposals rest heavily on the successful application of waste separation methods that are still under development and will not be fully tested before being put in place. DOE’s track record in this regard has not been strong; it has had to abandon past projects that were also based on promising—but not fully tested—technologies. Either or both of these challenges could limit the potential savings from DOE’s accelerated cleanup initiative. DOE has traditionally managed all of the wastes in its tanks as high-level waste because the waste resulted primarily from the reprocessing of spent nuclear fuel and contains significant amounts of radioactivity. However, by separating the waste into high-level and low-activity portions and managing the low-activity portion as something other than high-level waste, DOE could use less costly and less complicated treatment approaches. 
DOE has developed guidelines for deciding when waste in the tanks should not be considered high-level waste. In 1999, under Order 435.1, DOE formalized its process for determining which waste is incidental to reprocessing (“incidental waste”), not high-level waste, and therefore will not be sent to a geologic repository for high-level waste disposal. This process provides a basis for DOE to treat and dispose of some portion of its wastes less expensively as low-level or transuranic wastes. DOE’s ability to define some waste as incidental to reprocessing, and to then follow a different set of treatment and disposal requirements for that waste, is central to its overall strategy for addressing its tank waste. For example, DOE planned to use its incidental waste process to manage about 90 percent of its 54 million gallons of tank waste at the Hanford Site as low-level waste, rather than process it through a high-level waste vitrification facility. Using that approach, most of the waste would be eligible for treatment and disposal on site. Such an approach would save billions of dollars compared with treating all of the waste as high-level waste and sending it for disposal in a high-level waste geologic repository. A recent court ruling precludes DOE from reclassifying some of its waste as other than high-level waste. In March 2002, the Natural Resources Defense Council and others filed a lawsuit challenging DOE’s authority to manage its wastes through its incidental waste process. The plaintiffs alleged that DOE arbitrarily established the incidental waste determination process without proper regard for the law or properly establishing a justification for this process. A primary concern of the plaintiffs was that DOE would use its incidental waste process to permanently leave intensely radioactive waste sediments in the tanks with only minimal treatment. 
The lawsuit alleged that DOE’s incidental waste process improperly allows DOE to reclassify high-level waste as incidental waste that does not need to be treated in the same way as high-level waste. According to the plaintiffs, the Nuclear Waste Policy Act defines all waste originating from a given source—that is, from reprocessing of spent nuclear fuel—as high-level waste and requires that such waste be managed as high-level waste, yet DOE has chosen to differentiate its wastes according to the level of radioactivity and manage them accordingly. In a July 3, 2003, ruling on the lawsuit, the court agreed with the plaintiffs, stating that the portion of DOE’s Order 435.1 setting out its incidental waste determination process violates the Nuclear Waste Policy Act and thus is invalid. The court’s ruling could seriously hinder DOE’s efforts to implement its accelerated treatment and disposal strategies. Under the ruling, DOE’s incidental waste determinations cannot be implemented. Since the start of the lawsuit, DOE had not implemented any of its approved incidental waste determinations and had not yet decided whether to defer or proceed with its pending incidental waste determinations—such as those for closing tanks at the Savannah River Site and Idaho National Laboratory. If DOE appeals the court ruling, a lengthy legal process could follow. A lengthy legal process will also likely delay treatment plans for this waste and delay closing tanks on an accelerated schedule. For example, the Idaho National Laboratory planned to begin closing tanks in the spring of 2003, pending approval of an incidental waste determination that would allow DOE to close the tanks by managing tank waste residuals as low-level waste. 
A DOE official at the Idaho National Laboratory told us that while a delay of several months would not immediately threaten schedule dates, a delay beyond 24 months would seriously affect the site’s ability to meet its accelerated 2012 date to close all of the tanks. If the court’s ruling invalidating DOE’s incidental waste determination process is upheld, DOE may need to find an alternative that would allow it to treat waste with lower concentrations of radioactivity less expensively. Searching for such an alternative could delay progress at all three of DOE’s high-level waste sites that rely on incidental waste determinations. If DOE cannot meet its accelerated schedules, then potential savings are in jeopardy. At this point, the department does not appear to have a strategy to avoid the potential effects of challenges to its incidental waste determination authority, either from the current court ruling or future challenges. At the time of our report, DOE officials told us that they believed the department would prevail in the legal challenge. DOE believed it would be premature to explore alternative strategies to overcome potentially significant delays to the program that could result from a protracted legal conflict or from an adverse decision. Such strategies could range from exploring alternative approaches for establishing an incidental waste regulation to asking that the Congress provide legislative authority for DOE to implement an incidental waste policy. Like the ability to determine that some waste is incidental to reprocessing, the ability to separate the waste components is important to meet waste cleanup schedule and cost goals. If the waste is not separated, all of it—about 94 million gallons—may have to be treated as high-level waste and disposed of in the geological repository. Doing so would require a much larger repository than currently planned, and drive up disposal costs by billions of dollars. 
Successful separation will substantially reduce the volume of waste needing disposal at the planned repository, as well as the time and cost required to prepare it for disposal, and allow less expensive methods to be used in treating and disposing of the remaining low-activity waste. The waste separation process is complicated, difficult, and unique in scope at each site. The waste differs among sites not only in volume but also in the way it has been generated, managed, and stored over the years. The challenge to successfully separate the waste is significant at the Hanford Site, where DOE intends to build a facility for separating the waste before fully testing the separation processes that will be used. The planned laboratory testing includes a combination of pilot-scale testing of major individual processes and use of operational data for certain of those processes for which DOE officials said they had extensive experience. However, integrated testing will not be performed until full-scale facilities are constructed. DOE plans to fully test the processes for the first time during the operational tests of the newly constructed facilities. This approach does not fully reflect DOE guidance, which calls for ensuring that new or complex technology is mature before integrating it into a project. Specifically, DOE’s Project Management Order 413.3 requires DOE to assess the risks associated with technology at various phases of a project’s development. For projects with significant technical uncertainties that could affect cost and schedule, corrective action plans to address these uncertainties are required before the projects can proceed. In addition, DOE’s supplementary project management guidance suggests that technologies be developed to a reasonable level of maturity before a project progresses to full implementation to reduce risks and avoid cost increases and schedule delays. 
The guidance suggests that DOE avoid the risk of designing facilities concurrently with technology development. The laboratories working to develop Hanford’s waste separation process have identified several technical uncertainties, which they are working to address. These uncertainties or critical technology risks include problems with separating waste solids through an elaborate filtration system, problems associated with mixing the waste during separation processes, and various problems associated with the low-activity waste evaporator. Given these and other uncertainties, Hanford’s construction contractor and outside experts have seen Hanford’s approach as having high technical risk and have proposed integrated testing during project development. However, DOE and the construction contractor eventually decided not to construct an integrated pilot facility and instead to accept a higher-risk approach. DOE officials said they wanted to avoid increasing project costs and schedule delays, which they believe would result from building a testing facility. Instead, Hanford officials said that they will continue to conduct pilot-scale tests of major separation processes. DOE officials said they believe this testing will provide assurance that the separation processes will function in an integrated manner. After the full-scale treatment facilities are constructed, DOE plans to fully test and demonstrate the separation process during facility startup operations. The consequences of not adhering to sound technology development guidelines can be severe. At the Savannah River Site, for example, DOE invested nearly $500 million over nearly 15 years to develop a waste separation process, called in-tank precipitation, to treat Savannah River’s high-level waste. While laboratory tests of this process were viewed as successful, DOE did not adequately test the components until it started full-scale operations. 
DOE followed this approach, in part, because the technology was commercially available and considered “mature.” However, when DOE started full-scale operations, major problems occurred. Benzene, a dangerously flammable byproduct, was produced in large quantities. Operations were stopped after DOE spent about $500 million because experts could not explain how or why benzene was being produced and could not determine how to economically reconfigure the facility to minimize it. Consequences of this technology failure included significant cost increases, schedule delays, a full-scale waste separation process that did not work, and a less-than-optimum waste treatment operation. Savannah River is now developing and implementing a new separation technology at an additional cost of about $1.8 billion and a delay of about 7 years. Subsequent assessments of the problems that developed at Savannah River found that DOE (1) relied on laboratory-scale tests to demonstrate separation processes, (2) believed that technical problems could be resolved later during facility construction and startup, and (3) decided to scale up the technology from lab tests to full-scale without the benefit of using additional testing facilities to confirm that processes would work at a larger scale. Officials at Hanford are following a similar approach. Several experts with whom we talked cautioned that if separation processes at Hanford do not work as planned, facilities will have to be retrofitted, and potential cost increases and schedule delays would be much greater than any associated with integrated process testing in a pilot facility. In addition to the potential cost savings identified in the accelerated site cleanup plans, DOE continues to develop and evaluate other proposals to reduce costs but is still assessing them. Although the potential cost savings have not been fully developed, they could be in the range of several billion dollars, if the proposals are successfully implemented. 
At the Savannah River and Hanford sites, for example, DOE is identifying ways to increase the amount of waste that can be placed in its high-level waste canisters to reduce treatment and disposal costs. DOE also has a number of initiatives under way to improve overall program management. However, we are concerned that the initiatives may not be adequate. In our examinations of problems that have plagued DOE’s project management over the years, three contributing factors often emerged—making key project decisions without rigorous analysis, incorporating new technology before it has received sufficient testing, and using a “fast-track” approach (concurrent design and construction) on complex projects. Ensuring that these weaknesses are addressed as part of its program management initiatives would further improve the management of the program and increase the chances for success. DOE is continuing to identify other proposals for reducing costs under its accelerated cleanup initiative. Among the proposals that DOE is considering, the ones that appear to offer significant cost savings opportunities would increase the amount of waste placed in each disposal canister. The amount of waste that can be placed into a canister depends on a complex set of factors, including the specific mix of radioactive material combined with other chemicals in the waste, such as chromium and sulfate, that affect the processing and quality of the immobilized product. These factors affect the percentage of waste that can be placed in each canister because they indicate the likelihood that radioactive constituents could move out of the immobilizing glass medium and into the environment. The greater the potential for the waste to become mobile, the lower the allowable percentage of waste and the higher the percentage of glass material that must be used. 
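Because the total amount of waste is fixed, the number of canisters required scales inversely with the waste-loading fraction. The following is a minimal back-of-envelope sketch of that proportionality; the baseline canister count used here is hypothetical and chosen only for illustration, not a figure from the program.

```python
# Back-of-envelope sketch: with a fixed total quantity of waste, canister
# count scales inversely with the waste-loading fraction per canister.
# The baseline canister count is HYPOTHETICAL, for illustration only.

def canisters_needed(baseline_canisters: float,
                     old_loading: float,
                     new_loading: float) -> float:
    """Canisters required after raising the waste fraction per canister."""
    return baseline_canisters * old_loading / new_loading

baseline = 5000            # hypothetical canister count at the old loading
old, new = 0.28, 0.35      # waste loading: 28 percent vs. about 35 percent

needed = canisters_needed(baseline, old, new)
print(f"canisters needed: {needed:.0f} (saved: {baseline - needed:.0f})")
# Raising loading from 28 to 35 percent cuts the canister count by
# 1 - 0.28/0.35 = 20 percent, whatever the baseline count is.
```

Any per-canister treatment and disposal cost then multiplies directly against the canisters avoided, which is how a 20 percent reduction in canister count translates into multibillion-dollar savings estimates.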
Savannah River officials believe they can increase the amount of waste loaded in each canister from 28 percent to about 35 percent, and for at least one waste batch, to nearly 50 percent. In June 2003, Savannah River began to implement this new process to increase the amount of waste in each canister. If successful, Savannah River’s improved approach could reduce the number of canisters needed by about 1,000 canisters and save about $2.7 billion, based on preliminary estimates. Other efforts to increase waste loading of the canisters are also under way that, if successful, may permit further cost savings of about $1.7 billion. The Hanford Site is also exploring ways to decrease the number of waste canisters that will be needed by using waste forms other than the standard borosilicate glass. This effort is in a very early stage of development and cost-savings estimates have not been fully developed. In addition to site-specific proposals for saving time and money, DOE is also undertaking management improvements using teams to study individual issues. Nine teams are currently in place, while other teams to address issues such as improving the environmental review process to better support decision making have not yet been formed. Each team has a disciplined management process to follow, and even after the teams’ work is completed, any implementation will take time. These efforts are in the early stages, and therefore it is unclear if they will correct the performance problems DOE and others have identified. We are concerned that these management reforms may not go far enough in addressing performance problems with the high-level waste program. Our concerns stem from our review of initiatives under way in the management teams, our discussions with DOE officials, and our past and current work, as well as work by others inside and outside DOE. 
We have identified three recurring weaknesses in DOE’s management of cleanup projects that we believe need to be addressed as part of DOE’s overall review. These weaknesses cut across the various issues that the teams are working on and are often at the center of problems that have been identified. Two of these weaknesses have been raised earlier in this testimony—lack of rigor in the analysis supporting key decisions, and incorporating technology into projects before it is sufficiently mature. The final area of weakness involves using “fast-track” methods to begin construction of complex facilities before sufficient planning and design have taken place. DOE’s project management guidance emphasizes the importance of rigorous and current analysis to support decision making during the development of DOE projects. Similarly, OMB guidance states that agencies should validate earlier planning decisions with updated information before finalizing decisions to construct facilities. This validation is particularly important where early cost comparisons are susceptible to uncertainties and change. DOE does not always follow this guidance, yet no DOE management team appears to be addressing this weakness. Proceeding without rigorous review has been a recurring cause of many of the problems we have identified in past DOE projects. For example, the decision at Hanford to construct a vitrification plant to treat Hanford’s low-activity waste has not been validated with updated information. Hanford’s primary analysis justifying the cost of this approach was prepared in 1999 and was based on technical performance data, disposal assumptions, and cost data developed in the early to mid-1990s—conditions that are no longer applicable. Subsequent analyses have continued to rely on this data. 
However, since that time conditions have changed, including the performance capabilities of alternative technologies such as grout, the relative cost of different technologies, and the amount of waste DOE intends to process through a vitrification facility. DOE officials disagree with our assessment of their analysis, stating that a comprehensive analysis was conducted in the spring of 2003. However, DOE’s high-level waste project team agreed that the DOE officials at Hanford had not performed a current, rigorous analysis of low-activity waste treatment options including the use of grout as an alternative to vitrification, and the team encouraged the Hanford site to update its analysis based on current waste treatment and disposal assumptions. DOE officials at Hanford told us they do not plan to reassess the decision to construct a low-activity vitrification facility because their compliance agreement with the state of Washington calls for vitrification of this waste. They also stated that vitrification is a technology needed for destroying hazardous constituents in a portion of the waste. Our work on Department of Defense acquisitions has documented a set of “best practices” used by industry for integrating new technology into major projects. We reported in July 1999 that the maturity of a technology at the start of a project is an important determinant of success. As technology develops from preconceptual design through preliminary design and testing, the maturity of the technology increases and the risks associated with incorporating that technology into a project decrease. Waiting until technology is well-developed and tested before integrating it into a project will greatly increase the chances of meeting cost, schedule, and technical baselines. On the other hand, integrating technology that is not fully mature into a project greatly increases the risk of cost increases and schedule delays. 
According to industry experts, correcting problems after a project has begun can cost 10 times as much as resolving technology problems beforehand. DOE’s project management guidance issued in October 2000 is consistent with these best practices. The guidance discusses technology development and sets out suggested steps to ensure that new technology is brought to a sufficient level of maturity at each decision point in a project. For example, during the conceptual design phase of a project, “proof of concept” testing should be performed before approval to proceed to the preliminary design phase. Furthermore, the guidance states that attempting to concurrently develop the technology and design the facility for a project poses ill-defined risks to the project. Nevertheless, as we discussed earlier, DOE sites continue to integrate immature technologies into their projects. For example, as discussed earlier, DOE is constructing a facility at the Hanford Site to separate high-level waste components, although integrated testing of the many steps in the separations process has not occurred and will not occur until after the facility is completed. DOE, trying to keep the project on schedule and within budget, has decided the risks associated with this approach are acceptable. However, there are many projects for which this approach created schedule delays and unexpected costs. The continued reliance on this approach in the face of so many past problems is a signal of an area that needs careful attention as DOE proceeds with its management reform efforts. At present, no DOE management team is addressing this issue. Finally, we have concerns about DOE’s practice of launching into construction of complex, one-of-a-kind facilities well before their final design is sufficiently developed, again in an effort to save time and money. Both DOE guidance and external reviews stress the importance of adequate upfront planning before beginning project construction. 
DOE’s project management guidance identifies a series of well-defined steps before construction begins and suggests that complex projects with treatment processes that have never before been combined into a facility do not lend themselves to being expedited. However, DOE guidance does not explicitly prohibit a fast-track—or concurrent design and construction—approach to complex, one-of-a-kind projects, and DOE often follows this approach. For example, at the Hanford Site, DOE is concurrently designing and constructing facilities for the largest, most complex environmental cleanup job in the United States. Problems are already surfacing. Only 24 months after the contract was awarded, the project was 10 months behind schedule, construction activities have outpaced design work, causing inefficient work sequencing, and DOE has withheld performance fee from the design/construction contractor because of these problems. DOE experienced similar problems in concurrent design and construction activities on other waste treatment facilities. Both the spent nuclear fuel project at Hanford and the waste separations facility at the Savannah River Site encountered schedule delays and cost increases in part because the concurrent approach led to mistakes and rework, and required extra time and money to address the problems. In its 2001 follow-up report on DOE project management, the National Research Council noted that inadequate pre-construction planning and definition of project scope led to cost and schedule overruns on DOE’s cleanup projects. The Council reported that research studies suggest that inadequate project definition accounts for 50 percent of the cost increases for environmental remediation projects. Again, no DOE team is specifically examining the “fast-track” approach, yet it frequently contributed to past problems and DOE continues to use this approach. 
DOE’s efforts to improve its high-level waste cleanup program and to rein in the uncontrolled growth in project costs and schedules are important and necessary. The accelerated cleanup initiative represents at least the hope of treating and disposing of the waste in a more economical and timely way, although the actual savings are unknown at this time. Furthermore, specific components of this initiative face key legal and technical challenges. Much of the potential for success rested on DOE’s ability to dispose of large quantities of waste with relatively low concentrations of radioactivity on site by applying its incidental waste process. Recently, a court ruled that the portion of DOE’s order setting out its incidental waste determination process violates the Nuclear Waste Policy Act and is invalid. Thus, DOE is precluded from implementing this element of its accelerated initiative. Success in accelerating cleanup also rests on DOE’s ability to obtain successful technical performance from its as-yet unproven waste separation processes. Any technical problems with these processes will likely result in costly delays. At DOE’s Hanford Site, we believe the potential for such problems warrants reconsidering the need for more thorough testing of the processes, before completing construction of the full-scale waste separation facility. DOE’s accelerated cleanup initiative should mark the beginning, not the end, of DOE’s efforts to identify other opportunities to improve the program by accomplishing the work more quickly, more effectively, or at less cost. As DOE continues to pursue other management improvements, it should reassess certain aspects of its current management approach, including the quality of the analysis underlying key decisions, the adequacy of its approach to incorporating new technologies into projects, and the merits of a fast-track approach to designing and building complex nuclear facilities. 
Although the challenges are great, the opportunities for program improvements are even greater. Therefore, DOE must continue its efforts to clean up its high-level waste while demonstrating tangible, measurable program improvements. In the report being released today, we made several recommendations to help DOE manage or reduce the legal and technical risks faced by the program as well as to strengthen DOE’s overall program management. DOE agreed to consider seeking clarification from Congress regarding its authority to define some waste as incidental to reprocessing, if the legal challenge to its authority significantly affected DOE’s ability to achieve savings under the accelerated initiative. Regarding our recommendations to conduct integrated pilot-scale testing of the separations facility at Hanford before construction is completed, and to make other management improvements to address the weaknesses I just discussed, DOE’s position is that it has already taken appropriate steps to manage the technology risks and strengthen its management practices. We disagree and believe that implementing all of our recommendations would help reduce the risk of costly delays and improve overall management of DOE’s entire high-level waste program. - - - - - Thank you, Mr. Chairman and Members of the Subcommittee. That concludes my testimony. I would be pleased to respond to any questions that you may have. For further information on this testimony, please contact Ms. Robin Nazzaro at (202) 512-3841. Individuals making key contributions to this testimony included Carole Blackwell, Robert Crystal, Doreen Feldman, Chris Hatscher, George Hinman, Gary Jones, Nancy Kintner-Meyer, Avani Locke, Mehrzad Nadji, Cynthia Norris, Tom Perry, Stan Stenersen, and Bill Swick. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Energy (DOE) oversees the treatment and disposal of 94 million gallons of highly radioactive nuclear waste from the nation's nuclear weapons program, currently at DOE sites in Washington, Idaho, and South Carolina. In 2002, DOE began an initiative to reduce the estimated $105-billion cost and 70-year time frame of this cleanup. GAO was asked to testify on the status of this initiative, the legal and technical challenges DOE faces in implementation, and any further opportunities to reduce costs or improve program management. GAO's testimony is based on a report (GAO-03-593) released at the hearing. DOE's initiative for reducing the costs and time required for cleanup of high-level wastes is still evolving. DOE's main strategy for treating high-level waste continues to include separating and concentrating much of the radioactivity into a smaller volume for disposal in a geologic repository. Under the initiative, DOE sites are evaluating other approaches, such as disposing of more waste on site. DOE's current savings estimate for these approaches is $29 billion, but the estimate may not be reliable or complete. For example, the savings estimate does not adequately reflect uncertainties or take into account the timing of when savings will be realized. DOE faces significant legal and technical challenges to realize these savings. A key legal challenge involves DOE's process for deciding that some waste with relatively low concentrations of radioactivity can be treated and disposed of on-site. A recent court ruling invalidated this process, putting the accelerated schedule and potential savings in jeopardy. A key technical challenge is that DOE's approach relies on laboratory testing to confirm separation of the waste into high-level and low-activity portions. At the Hanford Site in Washington State, DOE plans to build a facility before conducting integrated testing of the waste separation technology--an approach that failed on a prior major project. 
DOE is exploring proposals, such as increasing the amount of high-level waste in each disposal canister, that if successful could save billions of dollars more than the current $29 billion estimate. However, considerable evaluation remains to be done. DOE also has opportunities to improve program management by fully addressing recurring weaknesses GAO has identified in DOE's management of cleanup projects, including the practice of incorporating technology into projects before it is sufficiently tested.
“Mobility” is an overarching term that describes the ability of employees, enabled by information technology, to perform their work in areas other than an assigned office or workstation. GSA has been reporting since the late 1990s that federal agencies could do more to reduce their space needs and achieve savings by recognizing that many of their employees are able to perform their work either outside the office or in smaller workspaces. GSA has noted that mobility can improve employees’ work-life balance, such as by giving employees greater control over their schedule, and can reduce employees’ commuting time. GSA has also noted that federal agencies have not yet widely embraced mobility, but trends in the private sector, workforce demographics, and technology suggest that agencies could do so in the future. Organizing the workplace around mobility can reduce the size and number of dedicated individual workspaces and accommodate the same number of employees in less total physical space. For example, when agencies provide the necessary technologies, employees may perform their work in areas other than an assigned office or workstation. It may be possible for agencies to reduce their space needs when this mobility is combined with hoteling, which means that employees give up their individual, permanent space and use shared, nonpermanent workspaces when they are in the office. This approach may also allow agencies to devote more of an organization’s space to collaborative areas, such as conference rooms, team rooms, and informal meeting rooms. Figure 1 illustrates one possible way an agency could reduce its per-employee footprint by redesigning space to reflect mobility. Officials at the five agencies we reviewed told us they are exploring or taking actions to reduce their space needs and achieve space efficiencies. These actions depend, in part, on a mobile workforce and include increasing telework participation, introducing hoteling, and reducing the size of individual workspaces. 
However, some agency officials we spoke with pointed out that actions such as increasing telework participation or implementing a hoteling program may not be appropriate in all instances. Some employees, for example, may not want to telework for personal reasons or be unable to telework due to the requirements of their work—these include employees who work with sensitive or classified documents, or who interact with members of the public as part of their jobs. With respect to the agencies we reviewed: USPTO has been taking steps to reduce its space needs as a result of increased workforce mobility for more than a decade. For example, under USPTO’s Patent Hoteling Program, more than 4,000 full-time employees—or about 36 percent of its workforce—telework 4 to 5 days per week. These full-time teleworking employees do not have a personal workspace in the office; instead, they use an automated system to reserve a workspace for times when they need to be in the office. According to USPTO officials, most of their employees are patent examiners who perform solitary, independent work, which makes them good candidates for teleworking. USPTO officials told us their Patent Hoteling Program has enabled USPTO to accommodate new hires without having to increase its space needs. Further, we note that there are indications that USPTO has avoided real estate costs as a result of its efforts. For example, in 2012, the Department of Commerce reported that, as a result of USPTO’s Patent Hoteling Program, USPTO has avoided almost $17 million in real estate costs annually since 2006, when the program started. In addition, an analysis of the costs and benefits of USPTO’s Patent Hoteling Program that USPTO conducted for fiscal year 2012 indicates that USPTO’s estimated savings could be larger. 
However, while we reviewed documentation provided by USPTO regarding its estimated cost savings, we could not verify USPTO’s estimates as some key assumptions—such as rental costs per square foot—were not supported. In designing its headquarters renovation, GSA officials told us they made extensive use of open, collaborative work environments; eliminated private offices for most employees, including senior-level employees; established a target ratio of one workstation for every two employees; and implemented a hoteling program for all employees. GSA also eliminated cubicles in favor of a workbench configuration of space, as shown in figure 2. GSA estimates that its renovation, scheduled to be complete before the end of 2013, will allow it to eliminate the need for additional leased space at four locations in the Washington D.C. area, resulting in projected savings of approximately $25 million in annual lease payments, and about a 38 percent reduction in needed office space. However, this estimate does not include the costs of GSA’s renovation, which has not been completed. The IRS implemented new space standards in October 2012 that reflect changes in the mobility of its workforce and is applying these new standards as part of its space-planning efforts. Under the IRS’s new space standards, employees who work out of the office an average of 80 hours or more per month no longer have a dedicated workstation and must hotel with other employees who are also out of the office an average of 80 hours or more per month. USDA set agency-wide goals for increasing the number of its employees with approved telework agreements as well as its overall telework participation rates for fiscal year 2013. Officials told us they believe that the increased use of telework could allow the department to reduce its real estate needs. 
In addition, officials of the Forest Service, an agency within USDA, told us that the Forest Service plans to increase telework participation, utilize hoteling, and decrease the size of individual workstations as part of its headquarters renovation. The renovated space, which formerly provided space for 420 employees, is expected to provide workspace for approximately 760 employees. Forest Service officials said the agency estimates saving at least $5 million in annual rent as a result of these efforts. However, this estimate does not include the costs for its headquarters renovation, which has not yet been completed. ATF officials told us they are developing a workstation-sharing policy for those employees who telework 3 or more days per week. According to ATF officials, this policy, which they expect to have in place in October 2013, will help ATF reduce the amount of space it needs to lease in the future. Officials at each of the five agencies we reviewed told us that they expect their efforts will result in space reductions and cost savings over time. However, it is too early to determine the specific cost savings that might be realized for actions agencies like GSA and the Forest Service are taking, given that they are in progress. Since the late 1990s, GSA has issued several reports that provide general guidance to assist with agencies’ space-planning efforts in an environment of increased workforce mobility. These reports have highlighted the variety of actions that agencies can take to achieve space efficiencies and help ensure that workspaces adequately support their agencies’ missions. GSA’s research on how public and private sector organizations use office space shows that office space in general, and federal office space specifically, is often underutilized as employees work elsewhere. 
For example, in 2006, GSA reported that its surveys of federal workspaces indicate that employees are typically seated at their desks less than one-third of the average work day because they are often working elsewhere—collaborating with team members, working off-site, or in meetings—the rest of the time. In its reports, GSA has provided examples of how various federal agencies have achieved space efficiencies, including reducing their space needs, by reconfiguring the layout of existing workstations and implementing various alternative work arrangements, such as hoteling and increased use of telework. While federal agencies are ultimately responsible for addressing their changing space needs, they can also seek assistance from GSA. An official at the Forest Service's headquarters told us the Forest Service worked with GSA to measure utilization of its leased office space in the Washington, D.C., area. According to the official, working with GSA helped provide the Forest Service with information on how its space was being used and helped the Forest Service determine that it could reduce its office space needs by 25 percent. In late 2011, GSA established a Workplace Program Management Office designed to help agencies explore and implement mobility initiatives. GSA helps agencies explore mobility as part of a broader approach to space planning and engages workplace strategists in developing solutions focused on mission, people, and space opportunities. For example, since 2011 GSA has offered two customized programs known as Client Portfolio Planning and National Engagements for Workplace Services. In part, these programs are designed to help agencies explore mobility, achieve space efficiencies, and determine the type of workplace configuration that best supports the accomplishment of the agency's mission. According to GSA officials, these programs also offer additional benefits, such as increased employee satisfaction. 
GSA officials stressed that customized programs work better because each agency has a unique mission and culture. GSA officials describe the Client Portfolio Planning program as one that will help an agency find ways to increase energy efficiency and mobility, potentially allowing an agency to reduce its need for office space. GSA collects information about the client agency’s space use and needs through various instruments, including employee surveys. GSA subsequently works with the agency to identify potential opportunities and develops recommendations. According to GSA officials, having a client agency actively participate in this process is a key factor to achieving success in implementing GSA recommendations. If an agency chooses to act upon GSA’s recommendations, it could take years to realize savings due to factors such as the timing of leases, the cost of reconfiguration, and negotiation with employee organizations. GSA officials told us that they work with three new agencies each year and currently have nine departments or agencies participating in this program. Unlike the Client Portfolio Planning program, GSA’s National Engagements for Workplace Services program is designed to help agencies examine their operations and identify new ways of working that leverage technology and furniture solutions. According to GSA, the information obtained by participating in this program can help guide an agency’s space-planning efforts and mobility initiatives and provide a business case for making changes. GSA funds initial services, such as performing space utilization studies, evaluating existing workplace conditions, developing alternative work arrangement standards, and implementing pilot programs. The client agency funds subsequent services, such as implementing developed programs and evaluating the results. 
To date, GSA has completed national engagements with two agencies—the Defense Contract Audit Agency and USPTO—and is in various stages of working with eight other departments or agencies. Of the agencies we reviewed, USPTO has worked with GSA under both programs. According to USPTO officials, as a result of working with GSA, USPTO set a goal of releasing space when leases expire and is exploring opportunities to consolidate additional personnel within its headquarters building. Several factors can affect what strategies or recommendations, if any, GSA's clients may decide to implement as a result of these programs. These include an agency's culture, mission, funding sources, and the flexibility it has within its existing leases. For example, USPTO stated that while it concurs with the goal of releasing space when leases expire, its future staffing projections need to be considered when evaluating any proposals. In addition, USPTO noted that it was in ongoing discussions with its employee organization over changes affecting employee workspaces. Also, not all of GSA's client agencies may complete these programs. For example, GSA officials told us that because one agency did not actively participate during the planning processes of the Client Portfolio Planning program, GSA opted to refocus its efforts on another agency that it perceived was more willing to collaborate in the planning process. GSA officials also noted that because these programs are relatively new, they involve an iterative and ongoing learning process, including the development of tools to help agencies explore the benefits of mobility. For example, GSA is in the process of developing an additional Excel-based tool aimed at helping agencies quantify the benefits and costs of increased telework participation and other alternative work arrangements, such as hoteling. According to GSA officials, this tool will be available to agencies later this year. 
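As an illustration of the kind of calculation such a tool might support, the sketch below estimates rent avoided when frequent teleworkers share hoteling workstations. The function name, sharing ratio, and all dollar and square-footage figures are hypothetical assumptions for illustration, not figures from GSA's tool.

```python
# Hypothetical sketch of a telework cost-benefit calculation; none of the
# figures below come from GSA's Excel-based tool.
def annual_rent_avoided(teleworkers: int,
                        sqft_per_workstation: float,
                        rent_per_sqft: float,
                        sharing_ratio: float = 2.0) -> float:
    """Estimate annual rent avoided if frequent teleworkers share
    hoteling workstations instead of keeping dedicated desks.

    sharing_ratio: number of teleworkers assigned per shared workstation.
    """
    desks_freed = teleworkers - teleworkers / sharing_ratio
    return desks_freed * sqft_per_workstation * rent_per_sqft

# Example: 400 frequent teleworkers, 150 usable square feet per workstation,
# $50 per square foot in annual rent, two teleworkers per hoteling desk.
print(f"${annual_rent_avoided(400, 150, 50):,.0f} per year")  # $1,500,000 per year
```

A fuller analysis would also net out one-time costs such as workspace reconfiguration and technology investments, which is consistent with GSA officials' point that savings can take years to materialize.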
GSA is also developing tools to help agencies measure the extent to which their office space is being used on a daily basis. For example, GSA is exploring how using mobile devices such as cell phones could provide information electronically on which offices are occupied. According to GSA officials, such tools would allow agencies to collect such information without the need to rely on manually observing the workspace. Our discussions with officials from the five selected agencies and five private sector organizations identified two factors as particularly important to achieving space efficiencies in an environment of increased workforce mobility: acquiring information about how office space is currently used and gaining management and employee support. Leading practices in capital decision-making and OMB guidance have stressed that having accurate data is essential to supporting sound capital planning and decision-making. By measuring how existing space is being used, organizations are better positioned to determine how much space they really need. We have also previously found that people are at the center of any management initiative for serious change, and that leading practices for managing change include ensuring that top leadership is behind transformations and that employees are involved throughout the transformation. Officials from three of the agencies we reviewed told us that organizations must first obtain the data necessary to inform their decision-making about future space needs; such data might include current information on space utilization rates, telework participation, and employee views about alternative work arrangements. For example, GSA officials noted that to obtain this type of information, they count heads (i.e., they manually count the number of offices occupied) and solicit employee opinions on proposed workspace changes. Officials at the other two agencies told us they take similar steps. 
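The head-count data these officials describe can feed a simple utilization calculation. The sketch below is illustrative only; the sample counts, workstation total, and safety margin are invented assumptions rather than figures from any agency.

```python
# Hypothetical head-count samples: number of occupied workstations observed
# during twice-daily walkthroughs over several weeks (all figures invented).
samples = [118, 124, 131, 109, 127, 122, 115, 130, 120, 126]
total_workstations = 200

peak = max(samples)
average = sum(samples) / len(samples)

peak_utilization = peak / total_workstations
avg_utilization = average / total_workstations

print(f"Peak utilization:    {peak_utilization:.0%}")
print(f"Average utilization: {avg_utilization:.0%}")

# A simple sizing rule: provision enough shared workstations to cover the
# observed peak plus a safety margin, rather than one desk per employee.
margin = 0.10
needed = int(peak * (1 + margin))
print(f"Workstations needed under hoteling: {needed} (vs. {total_workstations} today)")
```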
For example, Forest Service officials told us that when planning to consolidate space in the Washington, D.C., area, they performed head counts twice a day for several weeks to determine the feasibility of implementing hoteling. According to these officials, their data showed that 60 percent of employee workspaces were occupied at any given time. They then used that knowledge to reduce the number of workstations in their renovated space. IRS officials told us that when their agency implemented a new collaborative workspace design at its headquarters, they held focus groups to obtain employee input on various design features. Officials from each of the five private sector organizations we contacted also told us that when working to incorporate elements of mobility into space planning, either for a client or their own organization, data on how employees currently use their space are necessary for informed decision-making. For example, a representative of one organization told us that when working with a client, his organization first acquires data on how that client uses its space by examining costs, office density, and space utilization rates; within his own organization, they perform daily head counts to determine how their office space is being utilized. Similarly, a representative of another organization told us that he advises clients seeking to make physical changes to their workplace that they must first do some research to understand the current level of office utilization. When introducing physical space changes associated with increased workforce mobility—including the loss of dedicated workspace—organizations may encounter resistance from agency leaders, managers, employees, or employee organizations. Officials from all of the agencies and the private sector organizations we contacted described redesigning space to reflect mobility as a significant change. 
Several noted that employees have traditionally regarded their workspace as their own personal space and that mobility initiatives can result in reduced personal space for employees. Some pointed out that managers also may be uncomfortable with mobility initiatives for various reasons. For example, managers may be uncomfortable supervising employees who work outside the office, or they may perceive a reduction in office space to mean that they or their programs have become less important to the organization. To implement changes, the agencies we reviewed, as well as the private sector organizations we contacted, have taken a number of actions to gain support. For example, IRS officials told us they worked with agency leadership, as well as their union, when they implemented new space standards that reflect changes in the mobility of their workforce. IRS officials told us that they used budgetary pressures as the primary driver to help both managers and employees understand why they needed to substantially change their view of personal workspace. Similarly, Forest Service officials told us they worked with their employee organization when negotiating the space-sharing arrangements they plan to use in their renovated space. Representatives from the private sector told us their organizations took similar steps. For example, a representative of one organization told us that his organization advises clients to ensure, when downsizing offices, that employees understand the link between reducing real estate costs and budgetary sustainability. A representative from another organization told us that when his organization provides space-planning services for clients, it always ensures that employee organizations are consulted. Officials from the agencies we reviewed, as well as the private sector organizations we contacted, told us that management needs to gain organizational support for changes and cannot impose change on managers and employees. 
In their opinion, organizations that have tried to impose such change are less likely to succeed. We provided a draft of this report to OMB, GSA, IRS, USDA, USPTO, and ATF for review and comment. USPTO provided written comments, which are reprinted in appendix I. In its comments, USPTO stated that a February 2012 audit performed by the Department of Commerce's Inspector General indicated that USPTO has avoided real estate costs as a result of its Patent Hoteling Program. USPTO also stated that this audit indicated that further analysis would point to increased cost avoidance and savings, and that USPTO has since performed an analysis of its Patent Hoteling Program's costs and benefits. As noted in our report, we reviewed USPTO's analysis of the costs and benefits of its Patent Hoteling Program. While USPTO's analysis provided estimates for some of the Patent Hoteling Program's costs and benefits, we found that some key assumptions USPTO made—such as rental costs per square foot—were not supported. Accordingly, we could not verify USPTO's estimates. USPTO also provided technical comments, which we incorporated where appropriate. GSA, IRS, and USDA provided technical comments that we incorporated where appropriate. ATF and OMB did not have comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director of OMB; the Administrator of GSA; the Secretary of Agriculture; the Acting IRS Commissioner; the Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the USPTO; and the Director of ATF. Additional copies will be sent to interested congressional committees. We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-5731 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. David J. Wise, at (202) 512-5731 or wised@gao.gov. In addition to the contact named above, Keith B. Cunningham (Assistant Director), Russell C. Burnett, Colin J. Fallon, Robert K. Heilman, Wesley A. Johnson, Terence C. Lam, John P. Robson, James R. Russell, Crystal Wesco, and Nancy Zearfoss made key contributions to this report.
New technologies and the adoption of alternative work arrangements have increasingly enabled employees to perform some aspects of their work outside of the traditional office environment. As requested, GAO examined how aspects of mobility have affected agencies' space needs. This report identifies (1) actions selected agencies have taken as a result of increased workforce mobility to reduce their space needs; (2) the assistance GSA provides federal agencies that are exploring reducing their space needs, at least partly in response to increased workforce mobility; and (3) factors selected agencies and private sector organizations viewed as important to achieving space efficiencies in an environment of increased workforce mobility. GAO focused its review on five agencies: GSA, the Department of Agriculture, the Internal Revenue Service, the Department of Commerce's USPTO, and the Department of Justice's Bureau of Alcohol, Tobacco, Firearms and Explosives. Altogether, these five agencies reported holding or leasing more than 400 million square feet of office space in fiscal year 2011. GAO reviewed agency-specific guidance and other documents related to space planning and conducted interviews with key officials from the selected agencies. GAO also interviewed representatives from five private sector organizations to obtain their perspectives on how the private sector plans for its future space needs. The five selected agencies GAO reviewed are either exploring or taking actions such as increasing "telework" participation and implementing a "hoteling" program—which means that employees give up their individual, permanent space and use shared, nonpermanent workspaces when they are in the office—to reduce their space needs. For example, as part of its headquarters renovation, the General Services Administration (GSA) is making use of open, collaborative work environments and implementing a hoteling program for all employees. 
In addition, the Department of Agriculture set agency-wide goals for increasing the number of its employees with approved telework agreements as well as its overall telework participation rates for fiscal year 2013. The U.S. Patent and Trademark Office (USPTO) has taken steps to reduce its space needs as a result of increased workforce mobility for more than a decade, and there are indications that USPTO has avoided real estate costs as a result of its efforts. However, GAO was unable to obtain sufficient information to determine the accuracy and validity of USPTO's estimated cost savings. Beyond USPTO, the agencies GAO reviewed have not yet realized space reductions or cost savings because their efforts are too new. In addition, officials at the selected agencies pointed out that increasing telework participation or implementing a hoteling program may not be appropriate in all instances, such as for employees who work with sensitive or classified documents or who interact with members of the public. GSA offers general guidance as well as customized programs to help guide agencies' space-planning efforts in an environment of increased workforce mobility. In this guidance, GSA has shown that agencies can achieve space efficiencies by reconfiguring the layout of existing workstations and implementing various alternative work arrangements. While agencies may take steps on their own to address their changing space needs, GSA has offered two customized programs since 2011 to assist agencies. These programs are designed to help agencies explore mobility and achieve space efficiencies; however, GSA's client agencies determine whether to act on GSA's recommendations, and it may take years to realize savings due to factors such as the timing of leases and the cost of reconfiguration. 
GAO's discussions with officials from the selected agencies and private sector organizations identified two factors—acquiring information about the current utilization of office space and gaining the support of management and employees—that were frequently viewed as important for an organization to achieve space efficiencies in an environment of increased workforce mobility. By measuring how existing space is being used, organizations are better positioned to determine their future space needs. Similarly, by taking steps to obtain the support of leadership and employees, organizations can help facilitate the acceptance of mobility initiatives. Officials described the loss of dedicated workspace resulting from increased mobility as a significant change in the workplace and indicated that organizations that try to impose such changes are less likely to succeed than those that build organizational support. GAO makes no recommendations in this report. USPTO commented on the estimated cost savings of its hoteling programs, as discussed in this report.
NASA established DSN over 40 years ago with the intention of coordinating all deep space communications through a single ground system to improve efficiency and minimize duplication. Today, DSN consists of communications antennas at three major sites around the world—Goldstone, Calif.; Madrid, Spain; and Canberra, Australia. These sites are specifically positioned to offer complete coverage to deep space spacecraft regardless of their positions relative to the Earth. DSN officials informed us that while contractor personnel operate all three sites, NASA owns the physical assets and is responsible for funding all operations at the sites. Each site has a 70-meter antenna, which can provide communications with the most distant spacecraft, and several smaller antennas that can facilitate communications with closer spacecraft or can be arrayed to communicate with more distant missions. NASA's Jet Propulsion Laboratory is responsible for management of DSN and also serves as the distribution point for data collected from deep space. DSN supports an average of 35 to 40 deep space missions each year. According to program officials, as a mission is being developed, a representative from the DSN program works with the mission team to establish the amount of coverage the mission will need from DSN assets during its lifetime. This coverage includes the amount of time per day for routine communications and also critical coverage of major mission events. In most cases, missions must negotiate with the DSN program because they desire more coverage than DSN can provide. Once the amount of coverage time is established and major mission events are scheduled, DSN commits to that coverage in a Service Agreement with the mission. Within the agreement, DSN commits to providing coverage for 95 percent of the time agreed to with its mission customers, while the remaining 5 percent allows for unexpected disruptions during that coverage. 
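The arithmetic behind the commitment is straightforward. The sketch below is illustrative only; the daily coverage allocations used in the examples are hypothetical.

```python
# Illustrative arithmetic for DSN's 95 percent coverage commitment.
# The daily coverage allocations used in the examples are assumptions.
def allowed_disruption_minutes(agreed_hours_per_day: float,
                               commitment: float = 0.95) -> float:
    """Minutes of unplanned downtime per day that a service agreement
    tolerates while DSN still meets its coverage commitment."""
    return round(agreed_hours_per_day * 60 * (1 - commitment), 1)

print(allowed_disruption_minutes(12))   # 36.0 -- a 12-hour daily allocation
print(allowed_disruption_minutes(2.5))  # 7.5 -- a smaller routine allocation
```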
This 95 percent commitment almost guarantees that all critical mission events will be covered without disruption. Once an agreement is in place, missions are generally free to trade time among themselves if priorities change or if a particular mission is bumped from the network because of an unexpected anomaly in the system. The missions that DSN supports are not charged for their use of the system unless they require a unique technology that DSN must add to its system in order to provide coverage, which is relatively rare. DSN is primarily funded through its managing entity, the Science Mission Directorate, and receives resources consistent with its previous year's performance and budget. DSN works in conjunction with NASA's other space communications assets to provide coverage to missions at all distances from the Earth. The Ground Network provides communications capabilities to spacecraft in low-Earth orbit. Additionally, the Space Network, including the Tracking and Data Relay Satellite System, is an Earth-based satellite relay system that also supports missions in low-Earth orbit. In order for a spacecraft to receive support from all of these communications assets, NASA must ensure they are coordinated and can provide the capabilities for which they are intended. Throughout its history, NASA has used different management structures to achieve this coordination. According to NASA officials, from the Apollo missions in the 1960s through 1995, space communications was managed through an agency-wide communications entity with budgetary authority to provide appropriate investments in system capabilities. In 1995, this management and budget authority was devolved to a central contract managed out of the Johnson Space Center in an effort to cut costs and streamline maintenance of the assets. 
The savings from this realignment were never realized, however, and the communications assets were severely underfunded as a result of how they were managed under this arrangement. Subsequently, management and budget authority for these assets were brought back to NASA headquarters in 2001 and aligned with the mission directorate responsible for the customers each asset served. NASA then created the Space Communication Coordination and Integration Board to oversee the technical integration of these assets into a seamless space communications architecture. This is how space communications assets, including the DSN program, are currently managed at NASA. The NASA Authorization Act of 2005 requires that the NASA Administrator submit a plan for updating NASA's space communications architecture for low-Earth orbital operations and deep space exploration so that it is capable of meeting NASA's needs over the next 20 years. This plan is due to be submitted to the House Committee on Science and the Senate Committee on Commerce, Science and Transportation no later than February 17, 2007. In addition, the Conference Report accompanying the Science, State, Justice, Commerce and Related Agencies Appropriations Law, 2006, requires that NASA include a 10-year funding profile for DSN in its fiscal year 2007 budget request. DSN is currently able to meet most requirements of its existing workload. However, according to program officials, DSN's current operational ability is no predictor of future success, and they have significant concerns about the ability of the system to continue to meet customer requirements into the future. These concerns are based on the system's aging infrastructure and the projected additional workload on top of servicing existing missions. DSN suffers from an aged, fragile infrastructure. Significant parts of that infrastructure—including many antennas—were first built in the 1950s and 1960s and are showing their age. 
DSN program officials stated that the Goldstone complex is down, on average, 16 hours per week for maintenance and repairs due to problems associated with its age. While Goldstone contains some of the oldest equipment in the system and the poor condition of much of its equipment characterizes the underlying fragility of the network, operational disruptions occur across the entire network. For instance, the 70-meter dishes are widely regarded by program officials and mission customers as increasingly fragile, which calls into question expectations of their continued reliability. In fact, mission customers shared similar concerns that DSN's infrastructure is not in the condition it should be in to support their missions. With increasing use of these assets, they fear, service will only deteriorate and more disruptions will occur during service to their missions. Program officials and mission customers provided the following examples of disruptions that have occurred during service as a result of infrastructure deterioration: During a critical event for the Deep Impact Mission on July 4, 2005, corrosion of the subreflector on the 70-meter dish at DSN's Madrid site caused an unexpected disruption in service. In response, program managers had to shift coverage to alternative antennas. While they were able to provide adequate coverage of the event for the Deep Impact Mission, the shift to back-up antennas forced other users off at that time, which meant they lost coverage. In October 2005, a significant power disruption caused by corrosion to a major power line resulted in multiple antennas at the Goldstone complex going offline, resulting in several hours of downtime and a subsequent loss of scientific data. In November 2005, failure of a prime network server resulted in several hours of unexpected downtime, which in turn caused considerable loss of data to four research projects. 
During this anomaly, the Stardust, Mars Reconnaissance Orbiter, Mars Odyssey, and Mars Global Surveyor missions lost a total of 241 minutes of coverage. Program officials also expressed concern about the possibility of massive antenna failure due to metal fatigue. Ultimately, such a failure would result from a partial or total collapse of an antenna structure. Although no DSN antenna has yet collapsed from fatigue, an antenna in West Virginia similar in design and age to those used by the DSN program collapsed unexpectedly in 1988. DSN program managers are in the process of finding an engineering firm to conduct a survey of the program's antenna assets to assess their structural reliability. Beyond that action, program officials rely mostly on their experience and visual observations to assess the condition of these assets. Deferred maintenance also poses a significant challenge to the sustainability of DSN assets. Since 2002, the program has consistently deferred approximately $30 million in maintenance projects each year. These projects are commonly associated with infrastructure that is not directly related to system performance and have been given lower priority when more pressing needs limit the system's ability to provide coverage for its customers. For example, several roadway, water, and electrical projects at the Goldstone facility have consistently been deferred because of the need to address system maintenance needs considered more pressing. Although the program seeks to prioritize its most pressing projects and direct resources to them once its budget is allotted, operating aging facilities and systems inevitably leads to new repair needs arising unexpectedly, forcing program managers to continually juggle priorities. DSN also faces increasing competition between new and old users for coverage time on the system. 
There is a growing demand for a level of service that DSN is not likely to be able to provide to its customers. DSN promises 95 percent availability to its mission customers for routine mission coverage. According to program officials, the remaining 5 percent is reserved for unexpected failures and downtime during mission coverage. They said DSN can maintain its 95 percent commitment within its current mission set. However, as that mission set increases, officials become less confident in their ability to continue to achieve that level of service. The number of missions continues to grow as it has in the past—by some 350 percent over the last 20 years. By the year 2020, DSN is projected to be required to support twice the number of missions it does currently. DSN officials thus find themselves needing to balance this new demand with an equally compelling demand from existing "legacy" missions that have remained operational beyond their original lifetimes but are still returning science data and need to be maintained. Such legacy missions include the following: The Voyager missions—two similar spacecraft launched in 1977 to conduct close-up studies of Jupiter and Saturn, Saturn's rings, and the larger moons of the two planets—are still supported by DSN today even though their primary missions were completed in 1989. Each mission receives approximately 12 hours of coverage each day using one of the network's 70-meter dishes. The Mars Rover missions, although scheduled to end their prime missions in mid-2004, have gone well beyond their forecasted lifetimes. Program officials pointed out that even though they did not have a role in the decision to extend the missions, the program continues to allocate funds to support their operations to the present day. It is up to the DSN program to determine how best to provide service to its many mission customers, but this task is becoming increasingly complex. 
The effort to balance conflicting program priorities is a continuing struggle for DSN program managers. So far, DSN has been able to avoid stressing the capacity of the system because a select number of missions it was scheduled to support were either canceled or failed before requiring significant support. However, according to program officials, if the number of missions the system is scheduled to support begins to increase, the amount of service the system can provide will be limited. Further, officials expect that any commitments to support manned missions under the coming Vision for Space Exploration, in addition to what DSN currently must support, will prevent them from being able to provide necessary coverage to new mission customers or maintain the service guarantee of 95 percent availability to any customer. In addition, the DSN program is planning to begin decommissioning its 26-meter antennas in 2006 because of the costs of maintenance associated with their age. Officials told us that they believe the program's remaining 34- and 70-meter antennas will be unable to sustain the anticipated workload in the very near future, and one projection is that the system will reach capacity in 2013. If this occurs, the opportunity to continue adding new mission customers will be limited and the potential for lost deep space science is significant. DSN's future utility is also in question because NASA currently does not have a mechanism in place to match funding for space communications assets with program requirements, such as infrastructure and technology development needs, from an agency-wide perspective. 
At the end of 2003, NASA created the Space Communication Coordination and Integration Board to establish technical requirements for the integration of NASA’s space communications assets into a seamless communications architecture for the future. According to NASA officials, however, the Board is technical in nature and is not intended to manage space communications; it does not review individual program requirements or have any authority over the allocation of resources to the space communications programs. Instead, funding for space communications capabilities is controlled by the individual communications programs and their associated mission directorates, which may not consider agency-wide goals when making investments. Officials further said that no other agency-level entity reviews requirements for individual communications programs or establishes broader mission requirements for space communications. As a result, they informed us that program requirements, such as infrastructure and technology development needs, have consistently been given low priority by the agency. This disconnect between requirements and resources has forced programs to make tradeoffs to maintain functionality and has created the potential for programs to make investments that undercut agency-wide goals for space communications. In light of this problem, NASA has recently established a task group to identify ways to better match agency requirements with program resources. 
They said that the DSN program is forced to make tradeoffs to maintain functionality, but it is not able to fully address its requirements and has concerns about its ability to continue supporting the operations with which it is entrusted. Currently, identification of appropriate investment resources (in line with decisions made about the architecture) is performed by the mission directorate with responsibility over the program and the program’s customers. There is no overarching entity for space communications management at NASA to consider the specific investment needs of the programs and direct funding accordingly. And while all programs are supposed to consider the broader needs of the agency and other programs in their investment decisions, officials informed us that there is no formal oversight mechanism to ensure that investment decisions made at the program level are in line with those broader requirements. As a result of this mismatch between agency-level requirements and investment decisions for the programs that support those requirements, NASA has limited ability to prevent competing programs from making investments that, while supporting individual program requirements, undercut broader agency goals. For example, several agency officials noted that both the Deep Space Network and the Ground Network programs recently were on a path to develop separate array technologies to support overlapping requirements for the same lunar missions, which would have undercut agency efforts to create a seamless, integrated architecture for space communications and would have represented unnecessary duplication of effort and added costs. Officials said these pilot efforts were terminated after much of the planning for them had taken place; however, the termination was a result of budget constraints and a lack of clearly defined requirements, not a decision by an authority with an agency-wide investment perspective. 
In addition, another potential DSN customer—the Solar Dynamics Observatory—recognized that DSN could not provide the service it needed, so it invested in its own communications antennas to provide that coverage. Such duplication undermines the original intent of DSN to be an efficient, single network for NASA’s deep space communications on Earth. During the course of our review, NASA established a task group to address how best to manage the agency’s space communications programs so that program resources are invested in a way that supports agency-wide goals. The task group has yet to make any recommendations to address these issues. Currently, the task group must weigh two competing viewpoints within the agency. One viewpoint holds that the current structure of space communications, in which mission directorates and programs control resources, is ideal because it allows communications support to be controlled by the same entity that establishes and funds the programs that use the system. For example, DSN is funded by the Science Mission Directorate, which also supports the vast majority of missions that DSN serves. Some agency officials believe that this approach provides better customer service, since resource trade-offs can be made by those closest to both the customers and the service provider. However, under this current structure, maintenance requirements for DSN have consistently been deemed a low priority. Alternatively, others in the agency point to the success of a more centralized space communications structure, such as the one in place before 1995. Under such a structure, resource decisions can be made in light of an overall agency perspective on which communications program can best fulfill agency-wide communications goals. However, one official suggested that under this structure, maintenance requirements for DSN could become an even lower priority as the requirements of other programs are considered. 
In the former case, a program like DSN must compete for funding against individual missions; in the latter, it competes against other space communications assets. By establishing DSN as the primary communications system for supporting deep space missions, NASA has made itself reliant on the system for mission success—both now and in the distant future. By virtue of this reliance, NASA has a responsibility to ensure that the system is operationally sound and meets user needs. The system faces challenges that call into question how well it will continue to be able to adequately support deep space missions. The potential for more significant system failure and major disruption to the deep space exploration program, both manned and unmanned, looms large if nothing is done to address the condition of DSN. As NASA continues to depend on the program to meet its deep space communications requirements, the program and the agency will have to determine what those requirements are and how best to meet them with a viable system for the future. Defining these requirements in terms more comprehensive than the single commitment to provide coverage for 95 percent of committed time would provide a better understanding of what the program needs. Furthermore, quantifying and characterizing such requirements in more comprehensive terms will be critical to developing the plan required under the 2005 NASA Authorization Act. As NASA prepares to take on extensive exploration initiatives under the President’s Vision for Space Exploration, the agency needs to position itself to make investment decisions from an agency-wide perspective. 
Currently, because NASA does not consider program-level requirements when planning agency-wide commitments for space communications, many of these requirements, such as infrastructure needs, are not being addressed; left unaddressed, they will worsen and inhibit the agency’s ability to support future space exploration initiatives. Also, because space communications programs have the ability to direct resources to investments, the investments made may not support agency-wide requirements conducive to a broader and possibly more efficient space communications capability for the agency. As NASA begins to commit more resources to deep space exploration in the future, the agency must ensure that it properly addresses the communications needs of all of its missions and makes investments from that viewpoint. NASA has the opportunity to address this issue through a newly created task group charged with analyzing how this can best be achieved. To better position the Deep Space Network to meet existing workload challenges and prepare the network for future deep space communications responsibilities, we recommend that the NASA Administrator direct DSN to (1) identify total program requirements for deep space communications capabilities for the near and long term, in terms better defined than the single coverage commitment of 95 percent; (2) determine the extent to which the program’s current capabilities can support those identified requirements; and (3) develop a plan to address any gap between those capabilities and requirements and identify the estimated costs of any enhancements needed. 
As NASA’s task group on space communications considers how program requirements can be better integrated into overall agency goals for space communications capabilities, we recommend that the NASA Administrator direct the group to consider the following in carrying out its task: (1) identify what priority program-level requirements have in agency-level decisions affecting space communications; (2) determine how program-level requirements for space communications programs can be identified and communicated to agency-level decision makers; and (3) establish how the agency can identify program-level investments needed to address program requirements that support agency-wide goals for space communications and how to coordinate those investments to avoid duplication and additional costs. While considering these recommendations and the task at hand, the group should also consider the importance of sharing knowledge and communicating openly about these issues with all entities involved. NASA concurred with our recommendations. In commenting on the draft of our report, NASA pointed out that it already had a plan in place that addresses our first set of recommendations, namely the need for the agency to identify all DSN requirements for the near and long term, how it will meet those requirements, and the costs associated with meeting them. While we recognize that NASA has a DSN Roadmap, the agency still lacks a detailed strategy for addressing DSN needs for the future that includes all program requirements, such as deferred maintenance, in addition to the already projected mission needs. Furthermore, the DSN Roadmap does not include cost estimates and does not address the impact of unmet needs on the program’s ability to meet mission requirements. NASA also commented that the DSN has not been responsible for the loss of missions. Our report does not state that missions were lost because of the DSN. 
However, NASA officials provided GAO evidence that mission science had been lost as a result of disruptions in the operation of DSN, and that point is characterized in the report. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to the NASA Administrator and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or lia@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix III. To identify the challenges facing NASA’s Deep Space Network program in meeting its current and planned space communications workload, we performed the following: We obtained and analyzed NASA documents and briefing slides related to the operation and capabilities of the Deep Space Network, including budget submissions and funding breakouts, workforce projections, mission lists, fiscal year 2004 and fiscal year 2005 Program Operating Plans, the DSN Strategic Roadmap, mission agreements, the memorandums of agreement with the host countries of the foreign DSN sites, a 2004 NASA-wide facilities condition assessment, deferred maintenance information and work breakdown system data, risk assessments for various aspects of the network, return on investment analyses for various technology upgrades, and system performance and reliability data, including records of downtimes. 
We reviewed the NASA Vision for Exploration roadmaps and the National Research Council reports on those roadmaps, the Vision for Exploration Architecture report, and the NASA Strategic Plan for 2005 and Beyond for information about the role of DSN in the Vision. We also reviewed previous GAO reports on infrastructure investment, technology development, and deferred maintenance. We interviewed NASA mission officials to receive their feedback on the performance of DSN, including performance shortfalls, in meeting their needs and collected information related to those specific missions. We also discussed the nature of challenges experienced by the program through interviews with NASA and Jet Propulsion Laboratory officials and DSN contractor personnel and received written and oral responses from all. To determine the extent to which NASA is integrating DSN into its space communications plans for the future, we performed the following: We collected and analyzed information related to space communications architecture management at NASA, including the NASA 4.0 Communication and Navigation Capability Roadmap, space communication architecture plans, descriptions of the various space communications assets intended to play a role in the future architecture, the Memorandum of Agreement for the Management of NASA’s Space Communications Networks, and a description of the history of space communications management at NASA. We held discussions with NASA space communications officials about future space communications architecture requirements, what assets the architecture will include, and how its development is being managed by the Space Communication Coordination and Integration Board (SCCIB) and the Space Communications Architecture Working Group (SCAWG). We reviewed the charter of the SCAWG. 
We also discussed the budget development and execution process for DSN at the Science Mission Directorate (SMD) level, and how that impacts integration of the DSN into the overall agency space communications architecture. We met with NASA’s Space Communications Organization Study Group, which was established during the course of our review, to discuss its task of identifying options for the management of space communications for the future of NASA space exploration. We also reviewed the Terms of Reference (TOR) for this group to better understand its goals and time frames. To accomplish our work, we visited and interviewed officials responsible for DSN operations at NASA Headquarters, Washington, D.C.; the Jet Propulsion Laboratory in Pasadena, Calif.; and ITT Industries contractor officials at their offices in Monrovia, Calif., and at the DSN site complex in Goldstone, Calif. At NASA Headquarters, we met with officials from the Science Mission Directorate, including lead representatives from the Deep Space Network program, the Exploration Missions Directorate and the Space Operations Mission Directorate, including the Space Communications Architecture Working Group. We also met with DSN mission officials from the Mars Rovers, Deep Impact, Cassini-Huygens, and Stardust programs. We conducted our review from May 2005 to April 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, Brendan Culley, James Morrison, Sylvia Schatz, Robert Swierczek, Trevor Thomson, Hai Tran and Thomas Twambly made key contributions to this report.
The President's Vision for Space Exploration calls for human and robotic missions to the Moon, Mars, and beyond. In response, over the next two decades, NASA may spend $100 billion on new technologies and facilities that will require reliable ground communications to achieve those missions. Presently, that communications capability is provided by NASA's Deep Space Network--a system of antennas located at three sites around the world. However, the Network faces challenges that may hinder its provision of current and future mission support. This report discusses (1) the significant operational challenges faced by the Deep Space Network and (2) the extent to which NASA is integrating the Network into its future communications plans. While NASA's Deep Space Network can meet most requirements of its current workload, it may not be able to meet near-term and future demand. The system--suffering from an aging, fragile infrastructure with some crucial components over 40 years old--has lost science data during routine operations and critical events. In addition, new customers find they must compete for this limited capacity, not just with each other, but also with legacy missions extended past their lifetimes, such as NASA's Voyager, that nonetheless return valuable science. Program officials doubt they can provide adequate coverage to an increasing set of new mission customers, especially if missions increase dramatically under the President's Vision. The Deep Space Network's future utility is also in question because NASA does not currently match funding for space communications capabilities with agency-wide space communications requirements. While NASA created an agency-level entity to review the technical requirements for integrating assets like the network into an agency-wide space communications architecture for the future, that entity does not address program-level requirements or influence investment decisions. 
Control over such requirements and funding remains with the mission directorates and programs themselves. This disconnect allows programs to invest in capabilities that may undercut agency wide goals for space communications. After this review was initiated, NASA began to study how to better manage this gap between agency-level requirements and program-level funding, but no recommendations for action have yet been proposed.
We found that BSEE leadership has started several initiatives to improve its safety and environmental oversight capabilities, but its limited efforts to obtain and incorporate input from within the bureau have hindered its progress. Since 2012, BSEE has sought to augment its annual inspection program with a risk-based inspection program, but limited input from experienced regional personnel has hindered the bureau’s ability to develop and implement it. In 2012, BSEE began an initiative to develop an approach for conducting inspections of offshore facilities based on the level of risk they posed. However, to date, BSEE has not successfully implemented this supplemental risk-based inspection capability. BSEE leadership led the development of the risk-based program but, according to officials, did so with little input from regional personnel. Officials in the Gulf of Mexico region with knowledge and experience conducting previous risk-based inspection efforts told us that they were not apprised of key program products (e.g., a risk model developed by Argonne National Laboratory) until the products were well under development and that they were given little opportunity to comment on them. As a result, BSEE first identified deficiencies with its risk-based program during pilot testing in 2015, rather than working closely with experienced regional personnel earlier in the process to identify and remediate potential deficiencies during program development. This limited engagement with regional staff and management during development led to poor pilot results. In response, BSEE has changed the focus of the program and reduced expectations for its initial approach to risk-based inspections. 
In 2016, BSEE conducted an environmental stewardship initiative comprising two simultaneous environmental risk reduction efforts, but we found that these efforts were overlapping, fragmented, and uncoordinated, which reduced the effectiveness of the initiative and hindered the implementation of identified improvements. These efforts were led and coordinated by BSEE leadership in the Environmental Compliance Division at headquarters, which BSEE created in 2015 to establish national strategic goals and procedures for the bureau’s environmental compliance activities. However, the efforts were overlapping because BSEE leadership tasked both with the same five objectives. The two efforts were also fragmented because BSEE leadership did not effectively coordinate their execution, which hindered information sharing between them that could have enhanced their value. Moreover, because the efforts were uncoordinated, they resulted in the inefficient use of resources. In our report being released today, we recommended that the Secretary of the Interior direct the Assistant Secretary for Land and Minerals Management, who oversees BSEE, to establish a mechanism for BSEE management to obtain and incorporate input from bureau personnel and any external parties, such as Argonne National Laboratory, that can affect the bureau’s ability to achieve its objectives. In its written response to our report, Interior neither agreed nor disagreed with our recommendation. Interior stated that the recommendation reflects an ongoing BSEE commitment and that BSEE and Interior agree with the concept laid out therein. However, Interior’s comments do not discuss any specific actions taken or under way to do so. 
Without higher-level oversight within Interior establishing a mechanism for BSEE management to obtain and incorporate input from personnel within the bureau and any external parties that can affect the bureau’s ability to achieve its objectives, BSEE’s risk-based inspection program and environmental stewardship efforts are likely to experience continued implementation and efficacy problems. We found that since 2013, BSEE has begun four strategic initiatives to improve its internal management—two to improve its decision-making capabilities and two to enhance communication and transparency—but their successful implementation has been hindered by limited leadership commitment and a failure to address factors contributing to trust concerns. In 2013, BSEE began an initiative to develop an Enterprise Risk Management (ERM) framework but has not fully implemented it as a management tool. BSEE has made some progress over the past 3 years in implementing an ERM framework but has not completed the actions necessary to fully implement it. In conjunction with a contracted ERM support consultant, BSEE developed an iterative ERM cycle that includes six steps. In 2014, BSEE identified and prioritized 12 strategic risks that cover the lifecycle of BSEE operations. BSEE planned to verify the prioritization of its top several strategic risk treatments by July 2016 but did not do so. BSEE officials told us that the bureau halted ERM implementation while it acquired automated ERM software. The officials said BSEE planned to finalize a plan for its prioritized risk treatments by August 2016 and to promulgate a monitoring plan by October 2016 but missed both dates because of the temporary halt to ERM implementation. However, in November 2016, BSEE determined that it would reinitiate ERM implementation simultaneous with the implementation of the software. 
In 2014, BSEE began an initiative to develop performance measures for its programs but has not implemented any measures. BSEE’s October 2012 Strategic Plan-Fiscal Years 2012-2015 stated that the bureau must develop performance measures to assess the results of its programmatic efforts as well as its ability to reduce the risks of environmental damage and accidents. BSEE’s initiative to develop performance measures has consisted of three sequential efforts, in 2014, 2015, and 2016. For the first two efforts, the bureau contracted with a consultant. BSEE terminated the first effort, and although the consultant delivered a report identifying 12 performance measures during the second effort, BSEE officials said they were not implementing them due to a variety of factors, including data availability limitations. Regarding the third effort, begun in 2016, BSEE headquarters officials told us that this initiative, which is being conducted internally by BSEE personnel, represents the beginning of a multiyear effort to implement a performance management system. BSEE initially planned to finalize its internally developed list of performance measures in February 2016 but did not meet this deadline. In December 2016, BSEE completed a report that discusses 17 performance measures and the bureau’s plans for future iterations of their development. We have previously reported on BSEE’s struggles to effectively implement internal management initiatives. Specifically, in February 2016, we found that since its inception in 2011, BSEE had made limited progress in enhancing the bureau’s investigative, environmental compliance, and enforcement capabilities. 
Likewise, with regard to its ongoing strategic initiatives to improve its decision-making capabilities, more than 3 years have passed since BSEE initiated the development of its ERM framework, more than 2 years have passed since the bureau prioritized the strategic risks it faces, and more than 4 years have passed since it identified the development and implementation of performance measures as an organizational need. In that time, BSEE initiated several efforts to develop and implement such measures, and although BSEE has developed measures, it has yet to fully implement any. One of our five criteria for assessing whether an area can be removed from our high-risk list is leadership commitment—that is, demonstrated strong commitment and top leadership support. An example of leadership commitment is continuing oversight and accountability, which BSEE leadership has not demonstrated for implementing internal management initiatives, as evidenced by its limited progress in implementing key strategic initiatives as well as its inability to address long-standing oversight deficiencies. In our report being released today, we recommended that the Secretary of the Interior direct the Assistant Secretary for Land and Minerals Management, who oversees BSEE, to address leadership commitment deficiencies within BSEE, including by implementing internal management initiatives and ongoing strategic initiatives (e.g., ERM and performance measure initiatives) in a timely manner. Interior neither agreed nor disagreed with our recommendation. Interior stated that the recommendation reflects an ongoing BSEE commitment and that BSEE and Interior agree with the concept laid out therein. However, Interior’s comments did not discuss specific actions taken or planned to meet the intent of our recommendation. 
Without higher-level oversight within Interior addressing BSEE’s leadership commitment deficiencies—including by implementing internal management initiatives and ongoing strategic initiatives—in a timely manner, the bureau is unlikely to succeed in implementing such initiatives. In February 2016, BSEE announced an initiative to assess internal communications and develop an employee engagement strategy. BSEE employee engagement initiative documentation identifies the need to enhance communication vertically and horizontally across the bureau. BSEE leadership’s safety and environmental stewardship initiatives have had limited success, largely due to poor communication and coordination between headquarters and the regions. BSEE officials from across the bureau told us that the poor communication between headquarters and the regions led to a deficit of trust vertically throughout the bureau. They also told us that because BSEE headquarters was newly established as part of the reorganization of the Minerals Management Service (MMS) in 2010 following the Deepwater Horizon incident, few relationships existed between headquarters and regional personnel. The data collection plan for this employee engagement initiative focused on conducting outreach across the bureau to identify the means by which BSEE personnel prefer to receive information—for example, town hall meetings, BSEE’s website, or e-mail. BSEE conducted this outreach but as of November 2016 had not developed an employee engagement strategy—although its original target completion date was April 2016—and it is unclear when it will do so. In September 2016, BSEE decided to conduct a second round of outreach across the organization by spring 2017 to review feedback from the initial outreach, discuss next steps, and provide guidance on existing communications resources. 
To address trust concerns that exist between headquarters and the field, we recommended in our report being released today that the Secretary of the Interior direct the BSEE Director to expand the scope of its employee engagement strategy to incorporate the need to communicate quality information throughout the bureau consistent with federal standards for internal control. Interior neither agreed nor disagreed with our recommendation. Interior asserted that, since receiving our draft report for review, BSEE has completed the assessment and analysis of employee feedback and developed an engagement plan, but Interior did not provide documentary evidence of this plan or what it entails. Without providing evidence of BSEE’s activities, we could not confirm that any action had been taken and continue to believe that BSEE should expand the scope of its employee engagement strategy. In addition, the bureau’s Integrity and Professional Responsibility Advisor (IPRA) is responsible for promptly and credibly responding to allegations or evidence of misconduct and unethical behavior by BSEE employees and coordinating its activities with other entities, such as Interior’s Office of Inspector General (OIG). Senior BSEE officials from across the bureau stated that the IPRA function is critical to bolstering trust within the bureau because personnel need to have a functioning mechanism to which they can report potential misconduct by other employees. To increase transparency and consistency in how IPRA cases are handled following the completion of an investigation report, BSEE conducted a pilot initiative in 2016 to assess the types of allegations of misconduct being reported to the IPRA as well as the frequency with which the IPRA referred such allegations to other entities. In August 2016, BSEE determined that the majority of incoming allegations were being directed to the appropriate office for action. 
However, BSEE’s pilot initiative did not address unclear and conflicting guidance that could undermine organizational trust in how the IPRA addresses allegations of misconduct. Specifically, we found that the Interior Department Manual and IPRA guidance do not specify criteria for the severity thresholds for allegations that are to be referred to the OIG. As a result, the boundaries of IPRA responsibility are unclear. Additionally, BSEE’s pilot initiative did not address IPRA guidance that conflicts with the reporting chain established by the Interior Department Manual and BSEE’s organization chart. Some BSEE regional officials told us that the uncertainty about how the IPRA reports allegations to the OIG as well as its reporting structure led them to question the independence of IPRA activities, and they expressed concern that the IPRA could be used to retaliate against employees, which has undermined organizational trust in its activities. Under the federal standards of internal control, management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal controls. While BSEE has documented its policies, they are not clear. In our report being released today, we recommended that the Secretary of the Interior direct the BSEE Director to assess and amend IPRA guidance to clarify (1) severity threshold criteria for referring allegations of misconduct to the OIG and (2) its reporting chain. Interior neither agreed nor disagreed with our recommendation but stated that contrary to our draft report, the Interior Department Manual includes severity threshold criteria for referring allegations of misconduct to the OIG. We believe that the language in the Interior Department Manual does not provide the specificity needed to adequately define the boundaries of IPRA responsibility. 
Additionally, Interior stated that the IPRA reports to the BSEE Director, consistent with the reporting chain established in the bureau’s organizational chart and the Interior Department Manual. However, the BSEE Director told us that, in practice, the IPRA often reports to the BSEE Deputy Director rather than the Director. Moreover, our work found that the decision-making process of the IPRA Board—whereby the Board determines how to respond to an investigation without consulting the Director—does not align with the IPRA’s prescribed reporting chain. Without assessing and amending its IPRA guidance to clarify (1) the severity threshold criteria for referring allegations and (2) the IPRA reporting chain, BSEE risks further eroding organizational trust in the IPRA to carry out its mission to promptly and credibly respond to allegations or evidence of misconduct by BSEE employees. Chairman Farenthold, Ranking Member Plaskett, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Richard Burkard, Cindy Gilbert, Matthew D. Tabbert, Kiki Theodoropoulos, and Daniel R. Will. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's March 2017 report, entitled Oil and Gas Management: Stronger Leadership Commitment Needed at Interior to Improve Offshore Oversight and Internal Management (GAO-17-293). The Department of the Interior's (Interior) Bureau of Safety and Environmental Enforcement (BSEE) leadership has started several key strategic initiatives to improve its offshore safety and environmental oversight, but its limited efforts to obtain and incorporate input from within the bureau have hindered its progress. For example, to supplement its mandatory annual regulatory compliance inspections, in 2012, BSEE leadership began developing a risk-based inspection initiative to identify high-risk production facilities and assess their safety systems and management controls. During pilot testing in 2016, several deficiencies--including questions about the usefulness of its facility risk-assessment model and unclear inspection protocols--caused BSEE to halt the pilot. According to bureau officials, during the development of the initiative, BSEE headquarters did not effectively obtain and incorporate input from regional personnel with long-standing experience in previous risk-based inspection efforts, who could have identified deficiencies earlier in the process. GAO previously found that a key practice when implementing large-scale management initiatives is involving employees and incorporating their feedback into new policies and procedures. Instead, BSEE leadership appears to have excluded the input of regional personnel by, for example, not incorporating input beyond the risk-assessment tool when selecting the first pilot facility, even though the bureau's inspection planning methodology prescribed doing so. This undercut the pilot effort, raising questions about whether the bureau's leadership has the commitment necessary to successfully implement its risk-based program. 
Without higher-level leadership within Interior establishing a mechanism for BSEE to obtain and incorporate input from personnel within the bureau, BSEE's risk-based inspection initiative could face continued delays. Similarly, since 2013, BSEE leadership has started several key strategic initiatives to improve its internal management, but none have been successfully implemented, in part because of limited leadership commitment. For example, BSEE's leadership identified the importance of developing performance measures in its 2012-2015 strategic plan. BSEE began the first of three attempts to develop performance measures in July 2014 by hiring a contractor to develop measures, but the bureau terminated this contract in January 2015 after determining a need to complete its internal reorganization before developing such measures. A second effort to develop performance measures started in December 2015, using the same contractor, and yielded 12 performance measures in March 2016, but BSEE did not implement them, in part because data did not exist to use the measures. By the time BSEE received this contractor's report, it had already begun a third effort to internally develop performance measures; as of November 2016, it had identified 17 draft performance measures, but BSEE leadership missed repeated deadlines to review them. BSEE officials told GAO that after leadership approval, the bureau plans to pilot these measures and develop others. BSEE leadership has not demonstrated continuing oversight and accountability for implementing internal management initiatives, as evidenced by its limited progress implementing key strategic initiatives. Without higher-level oversight within Interior addressing leadership commitment deficiencies within BSEE, the bureau is unlikely to succeed in implementing internal management initiatives.
Mexico’s Maquiladora program has been a central feature of the U.S.-Mexico border. The U.S.-Mexico border stretches nearly 2,000 miles, from the Pacific Ocean in California to the Gulf of Mexico in Texas. Four U.S. states (Arizona, California, New Mexico, and Texas) and six Mexican states (Baja California, Chihuahua, Coahuila, Nuevo Leon, Sonora, and Tamaulipas) make up the border. Texas contains the longest section of the U.S. border with Mexico, with several large and numerous small border crossings across the Rio Grande. Compared with Texas, California’s border with Mexico is relatively short, but it includes San Diego–Tijuana, the single busiest U.S.-Mexico border crossing. Arizona’s principal border crossing with the Mexican state of Sonora at Nogales plays a significant role in agricultural trade. The relatively small border crossings between New Mexico and Mexico reflect the sparsely populated areas in that region of the border. Figure 1 shows the U.S.-Mexico border, including all U.S. and Mexican border states, some Mexican border cities with varying concentrations of maquiladora plants, and some ports of entry on the U.S. side of the border. During the 1990s, the population along the border experienced significant growth. On the U.S. side, the population increased by 21 percent, considerably more than the overall U.S. population, which grew by 13.2 percent. Some cities on the U.S. border experienced significant increases in population, such as Yuma, Arizona, and McAllen, Texas—respectively, the third and fourth fastest growing metropolitan areas in the United States. Population on the Mexican side of the border increased even more rapidly, growing by 32 percent between 1990 and 2000. The majority of the border’s residents live in communities along the border that are composed of twin cities—a city on each side of the border—such as San Diego–Tijuana and El Paso–Juarez. 
The San Diego–Tijuana area alone has a combined population of about 4 million, and the El Paso–Juarez area has a population of 1.9 million. The Maquiladora program was first established by the government of Mexico in 1965 as part of the Border Industrialization Program (BIP), and maquiladoras have been a driving force in the development of the U.S.-Mexico border region. Under the BIP, Mexico encouraged foreign corporations to establish operations along the northern border to provide employment opportunities for Mexican workers displaced after the termination of a temporary cross-border work arrangement known as the Bracero Program. Also known as “in-bond” plants, maquiladoras were allowed to import temporarily, on a duty-free basis, raw materials and components for processing or assembly by Mexican labor and to re-export the resulting products, primarily to the United States. The maquiladoras have undergone a dynamic evolution over the last four decades. In the mid-1960s, maquiladoras consisted primarily of basic assembly operations taking advantage of Mexico’s low labor costs. By the 1980s, U.S. multinationals representing various industrial sectors established maquiladora plants along the U.S.-Mexico border. Japanese and European companies also established maquiladora plants in Mexico to compete in the U.S. market. Since the 1980s, some firms moved from low-skilled assembly work to more advanced manufacturing operations. Researchers from Mexico’s Colegio de la Frontera and San Diego State University note that the number of “technical workers” employed by maquiladoras increased significantly from the early 1980s to the 1990s. Some maquiladoras now employ workers in development and design as well as manufacturing. For example, Delphi Automotive in Juarez, the largest private employer among maquiladoras in Mexico, now has a sophisticated research and development center that employs hundreds of highly skilled workers and engineers. 
Over the years, as maquiladoras evolved and expanded, the term maquiladora has come to be used loosely to refer to almost any subsidiary plant of a foreign company involved in export from Mexico, particularly those located along the U.S. border. However, the Maquiladora program continues to be quite distinct from other efforts initiated by the Mexican government to encourage exports. Firms must register with the government of Mexico to be considered maquiladoras and, once registered, are eligible for several key benefits, such as preferential tariffs on inputs and machinery, and simplified Mexican customs procedures. In this report, we define maquiladoras as those firms officially participating in Mexico’s Maquiladora program. In addition to the Maquiladora program, the U.S.-Mexico trade relationship has also been influenced by other important developments such as NAFTA. NAFTA was concluded between the United States, Mexico, and Canada in 1992 and entered into force on January 1, 1994. This agreement provided, among its other provisions, for the elimination of tariffs and other barriers to U.S.-Mexico bilateral trade by 2008. It also required Mexico to change certain provisions of the Maquiladora program, such as elimination of duty-free benefits for imports of components from non-NAFTA countries. U.S.-Mexico trade has expanded sharply since NAFTA’s inception. Much of this trade involves “production sharing,” whereby final goods are produced with parts, labor, and manufacturing facilities from the United States and Mexico. Because it enables firms to increase specialization, take advantage of low labor costs in Mexico, and attain other efficiencies, production sharing is a key benefit to U.S. companies under the Maquiladora program. A variety of social and economic factors create strong linkages between communities on both sides of the U.S.-Mexico border, and maquiladoras play a critical part in this interdependence. 
Residents in the twin cities cross the border about one million times every day to work, shop, attend classes, visit family, and participate in other activities. Maquiladoras have increased trade between the United States and Mexico and have helped to develop the economies of several border regions. While communities along the U.S.-Mexico border share certain traits, each region is distinct. A wide range of social ties—educational, political, cultural, and familial—contribute to integration along the U.S.-Mexico border. For example, certain U.S. universities in border cities offer combined degrees or exchange programs with their counterparts on the Mexican side. In some schools, such as the University of Texas at El Paso and the University of Texas–Pan American, Mexican nationals cross the border regularly to attend classes. Political interaction and cooperation between local authorities of twin cities enhance integration. Cultural and family ties also contribute significantly to integration at the border. The U.S. counties with the highest concentration of Hispanics are located along the southwest border, and by far most of the Hispanics in southern border states are of Mexican descent. Trade and retail sales contribute to economic interdependence at the border. Approximately $200 billion in trade crossed the U.S.-Mexico border in 2002. Much of U.S.-Mexico trade occurs between border states. For example, 62 percent of U.S. exports to Mexico originated in Texas, California, Arizona, and New Mexico; of this, 70 percent was destined for Mexican border states. Research by the Federal Reserve Bank of Dallas indicates that trade between the United States and Mexico has positive effects on border communities, because U.S. border cities typically provide a variety of services such as transportation and customs brokerage. Retail sales to Mexican nationals also contribute significantly to the economies of cities on the U.S. side of the border. 
According to one estimate, retailers in Texas make about $15 billion in sales to Mexican shoppers annually. In McAllen, Texas, about 35 percent of retail sales (roughly $700 million) are made to Mexican nationals. Residents from Tijuana make 1.5 million trips per month into the San Diego area, mainly to shop. In El Paso, Juarez residents account for more than 20 percent of retail sales. On the other hand, because of the high cost of pharmaceuticals in the United States, a growing number of U.S. residents regularly cross the border into Mexico to purchase prescription drugs. Maquiladoras import most inputs from the United States and export most of what they produce back to the United States. Growth in U.S.-Mexico trade and economic interdependence at the border during the last decade can be explained to a great degree by the participation of maquiladoras in supplying a strong U.S. market during the 1990s. Mexican exports increased by about 340 percent between 1993 and 2001, in large part because maquiladora-related exports increased by over 400 percent during this time, according to a report by the Mexican Commission on Northern Border Affairs. By 2001, maquiladoras accounted for 41 percent of total Mexican trade with all countries: 34 percent of Mexico’s imports and 48 percent of its exports (see fig. 2). Trade with the United States makes up a significant share of maquiladora trade. In 2001, 79 percent of maquiladora imports of components and parts for production were from the United States, and 98 percent of their exported products were destined for the U.S. market. Maquiladora trade between the United States and Mexico totaled about $121 billion in 2001, with maquiladora exports ($75 billion) accounting for more than half of Mexico’s total exports to the United States. Border cities are typically seen as the primary beneficiaries of growing U.S.-Mexico trade. 
However, states such as Florida, Tennessee, and Ohio, which doubled their exports to Mexico during the second half of the 1990s, have also benefited from growing U.S.-Mexico trade. Furthermore, maquiladoras are directly connected to U.S. companies through ownership and production ties. The list of Mexico’s top 100 maquiladora employers includes such U.S. firms as Delphi, RCA, Ford Motor Company, Tyco, General Electric, General Instruments, Johnson & Johnson, and ITT. All told, 79 percent of the top 100 maquiladora employers are from the United States. Maquiladoras are important to the United States because they are a strategic means by which U.S. companies stay competitive in the global marketplace. By offering lower production costs, maquiladoras enable U.S. companies to produce goods more cheaply in Mexico than in the United States. In essence, maquiladoras and U.S. companies are part of a greater production-sharing model, which is an important part of overall North American production. Moreover, more than 26,000 U.S.-based companies, located mainly in the Midwest, supply maquiladoras with raw materials and components. The Mexican border region has benefited in terms of job creation from the dominant presence of maquiladoras on the Mexican side of the border. Overall, 77 percent of all maquiladora establishments are located in the six Mexican border states shown in figure 3. Also, about 83 percent of maquiladora employment was located in border states. During most of the 1990s, maquiladoras represented more than half of the industrial activity in the states of Chihuahua and Tamaulipas. During the same time period, the maquiladoras represented nearly three quarters of industrial production in the state of Baja California, which contained almost one third of Mexico’s maquiladora firms. Cities on the U.S. side of the border have benefited from the large flow of trade created by maquiladoras. Between 1990 and 2002, more than half a million jobs were added to the U.S. 
border region, including jobs in services, retail trade, finance, and transportation, and after 1995, employment growth in the U.S. border region exceeded the U.S. national average (see app. I for details). The employment gains are particularly notable, because the border region historically has had high rates of unemployment. Some studies have outlined the effect of overall border economic trends on local border communities. For example, researchers estimated that in one Texas border community, in 2001, services and supplies purchased by maquiladoras amounted to $136 million and a total of 32,577 jobs were sustained by maquiladoras and related manufacturing activity. The same researchers estimated that 15 percent of maquiladora workers’ salaries was spent in the region on goods and services. In one Arizona border community, researchers surveyed maquiladora workers and found that workers who crossed the border to shop made an average of 5.5 trips a month and spent about $35 on each trip. Almost one third of retail sales in the same Arizona community are attributed to Mexican nationals, according to local sources. Despite their role in generating employment in Mexico, the maquiladoras’ benefit to the country remains a subject of some debate. Some express reservations about the maquiladoras’ ability to generate economic development for Mexico, since these plants have generally been unable to establish a network of domestic inputs providers or create significant linkages to the internal Mexican economy. In April 2002, for example, the former Mexican Foreign Minister noted that without proactive Mexican government policy to set up domestic suppliers, the benefits of the maquiladora industry would never extend beyond the border. In addition, critics in the environmental and labor movements on both sides of the border also assail these plants. Some environmental groups claim that maquiladoras are responsible for the growing pollution problem in the border region. 
Similarly, some labor organizations criticize the maquiladoras for the low wages paid to workers and for allegedly poor working conditions. Although communities along the U.S.-Mexico border share certain traits, they are also quite distinct. The level of integration between cross-border twin cities depends on location, population, economic profile, and cross-border political cooperation. We observed some of these differences during fieldwork in three border areas: McAllen–Reynosa, El Paso–Ciudad Juarez, and San Diego–Tijuana. McAllen–Reynosa. McAllen and Reynosa are economically interdependent. Both are medium-size cities, McAllen with a population of about 569,000, and Reynosa with about 420,000, and there are no other sizeable urban areas nearby on either side of the border. Officials at the McAllen Economic Development Corporation (MEDC) capitalized on the interdependence between McAllen and Reynosa and incorporated it into their economic strategy for the region starting in 1988. At that time, McAllen had high unemployment and Reynosa’s economy was based on subsistence farming. Working with political leaders in McAllen and Reynosa, MEDC developed a strategy based on promoting industrial development in Reynosa, recognizing that if companies opened maquiladoras there, McAllen would benefit by providing inputs and offering management, engineering, warehousing, trucking, legal, and accounting services. In the 14 years since its establishment, MEDC has recruited 178 companies to the area, in diverse manufacturing sectors, such as electronics, auto parts, and telecommunications. El Paso–Ciudad Juarez. Although El Paso and Ciudad Juarez are also closely integrated, such ties have developed differently than in McAllen–Reynosa. Ciudad Juarez is a larger metropolitan area, with a population of 1.2 million, and it is home to more maquiladora employees than any other Mexican city. On the U.S. 
side, El Paso and the neighboring communities in southern New Mexico are much smaller. El Paso’s economy has been shaped by economic activity in Ciudad Juarez, especially that of providing services to maquiladoras. In addition, Juarez residents contribute to El Paso’s economy by purchasing items ranging from cars to clothing, as well as services such as financial and health services. In 2001, there were approximately 46 million northbound crossings via the three international bridges that connect the two cities. However, business leaders and other observers whom we met in the El Paso–Juarez area frequently noted that integration and economic dependence between El Paso and Juarez occurred spontaneously, rather than by design. The development of maquiladoras on the Mexican side occurred much earlier than in Reynosa and was largely attributed to individual efforts by entrepreneurs in Juarez and El Paso that began in the 1960s, rather than to a collective vision. In recent years, however, in Santa Teresa, a nearby border community in southern New Mexico, developers created a strategic plan to build an industrial park as a supplier base, with warehouse and distribution facilities, to service maquiladoras. The Santa Teresa port of entry opened 11 years ago. Developers envisioned that this border crossing would serve as an alternate point of entry to El Paso for cross-border trade. San Diego–Tijuana. The dynamics of integration between San Diego and Tijuana are notably different from other cross-border twin cities. In this area, economic dependence is more one-sided. Unlike El Paso or McAllen, San Diego is a large metropolitan area in its own right, with a population of close to 3 million. Many of the major economic activities in San Diego, including defense and space manufacturing, biosciences, and tourism, are not directly connected to Tijuana. While Hispanics account for at least 50 percent of the population in most U.S. 
counties along the Southwest border, they account for only about 27 percent of the population in San Diego County, suggesting lower levels of family ties or connections to Mexico. In contrast, Tijuana, with a population of about 1.2 million, is heavily dependent on maquiladoras, and the city is closely tied to the U.S. market. In addition to U.S. companies, Tijuana has also been the preferred location for Japanese and Korean maquiladora investments, which have made this area the world’s leading producer of color televisions. More than 600 maquiladora plants, employing approximately 150,000 people, are located in Tijuana. Moreover, people in Tijuana are more likely to cross the border to shop or do business in San Diego than vice versa; in fact, it is estimated that two out of three residents of San Diego have never been to Tijuana. In contrast, Tijuana residents spend between $3 billion and $5 billion on purchases in the San Diego region, mostly in the communities adjacent to the border. In addition, 7 percent of economically active people in Tijuana work in San Diego, earning an estimated $650 million a year in wages and salary income. Maquiladora production and employment grew rapidly throughout the 1990s but declined sharply after October 2000. Within the diverse maquiladora sector, the decline was particularly steep in certain industries and in some border cities. Overall, Mexican manufacturing production in the border region also declined and cross-border trade flows fell. At the same time, U.S. border employment in manufacturing and certain other trade-related sectors contracted. Nevertheless, the U.S. border region continued to experience stronger employment growth than did the United States as a whole. During the 1990s, maquiladoras proved to be one of the more dynamic components of Mexican manufacturing. 
Maquiladora production increased by 197 percent from January 1993 until its peak in October 2000, while overall manufacturing production in Mexico increased by only 58 percent in the same time period (see fig. 4). During that time period, maquiladora employment tripled, adding more than 900,000 jobs to the Mexican economy. In 2000, maquiladoras accounted for about 4 percent of total employment and about 20 percent of manufacturing employment in Mexico. With respect to employment, most major Mexican border cities and industrial sectors experienced growth in maquiladora employment over the decade, although some grew faster than others. For example, Tijuana and Mexicali tripled their maquiladora employment, and the electronics industry more than doubled its maquiladora employment in the border region. The electronics industry, which was already the largest maquiladora employer, added more than 200,000 jobs in the border region during the 1990s. For the Mexican border region as a whole, maquiladora employment rose 145 percent—from 342,555 in January 1990 to 839,200 in October 2000 (see app. V, table 8, for more information). While maquiladoras have typically been concentrated in the border region, maquiladora employment growth throughout the rest of Mexico was actually higher than in the border region during the 1990s. Growth in the nonborder region was particularly strong in the textile and apparel sector, in which employment rose in the nonborder region from about 22,000 in 1990 to about 224,000 jobs in 2001 (fig. 5). As a result of the stronger growth in the nonborder region, the share of textile and apparel maquiladora employment in the border region fell from 49 percent in 1990 to 17 percent in 2001. Much of the investment in the apparel sector occurred in anticipation of duty-free treatment for most U.S. imports of apparel from Mexico under NAFTA in 1999. 
After growing since the program’s inception over 35 years ago, particularly in the 1990s, Mexican maquiladora production and employment began to decline sharply in late 2000. Maquiladora production declined about 30 percent from late 2000 to early 2002. At the same time, maquiladora employment contracted about 20 percent, losing nearly 290,000 jobs nationally, about 174,000 of which were located in the border region. Similarly, the number of maquiladora establishments (factories) in operation began to decline as well (see fig. 6). Nevertheless, even with the pronounced declines, the overall numbers of maquiladora employees remain at levels similar to those in 1998–1999. While the Mexican maquiladora downturn was evident both nationally and in the border region, certain industries experienced larger declines (see fig. 7). For instance, in the border region, the electronics industry experienced one of the steepest and largest maquiladora employment declines of any industrial sector, contracting by 31 percent and losing more than 112,000 jobs in the 2-year period between October 2000 and October 2002. In contrast, the automobile and auto parts industry experienced a less severe maquiladora employment decline of 13 percent (about 24,000 jobs) in less than a year, before resuming some growth in November 2001. Textiles and apparel also experienced a steep employment decline, falling by 26 percent and losing more than 12,000 jobs. Nationally, the textile and apparel industry lost more than 70,000 jobs. In all other border industrial sectors combined, maquiladora employment declined by about 16 percent over a little more than a year but has grown by about 4 percent since January 2002. As figure 8 illustrates, the decline in maquiladora employment also affected cities in the Mexican border region differently. 
The two largest border cities, Juarez and Tijuana, both experienced significant declines in maquiladora employment, accounting for over half of the total jobs lost in the border region. After peaking in October 2000, by December 2002, maquiladora employment had fallen 27 percent in Juarez and 30 percent in Tijuana. The smaller city of Nogales, Sonora, experienced one of the sharpest percentage changes in maquiladora employment in the border region, declining by 44 percent. In contrast, the city of Reynosa experienced a decline of only about 5 percent between September and December 2000, and its maquiladora employment has since rebounded, with 7 percent growth since January 2001. Reynosa’s decline in electronics and auto parts employment was much less severe than other cities. The decline in Mexico’s maquiladora production contributed to a decline in overall manufacturing production in Mexico’s border region. Figure 9 shows the growth of manufacturing production for three Mexican border states: Baja California, Coahuila, and Sonora. Baja California, the state with the largest share of maquiladoras, grew more rapidly than the other border states but also experienced the largest decline in overall manufacturing production after October 2000. Similarly, manufacturing production in Coahuila, Nuevo Leon (not shown), and Sonora also experienced downturns beginning in late 2000 and early 2001. During the maquiladora decline, exports, imports, and overall trade through U.S.-Mexico land border ports also dropped. The value of cross- border trade dropped 5 percent in 2001 and remained flat in 2002, owing in large part to the 10 percent decline in U.S. exports to Mexico through these ports. Although each of the four major land border ports experienced some decline, Nogales experienced the greatest decline, losing about 20 percent of its value between 2000 and 2002 (see app. IV, table 7, for levels of U.S. trade with Mexico through the four main land border points). 
Maquiladoras, which accounted for 40 percent of U.S. exports to Mexico and 54 percent of Mexican exports to the United States in 2001, contributed to this decline. The decline in Mexico’s maquiladoras was also felt on the U.S. side, as manufacturing employment in border municipalities declined by 6 percent overall from 2000 through 2002. Other U.S. sectors related to trade also experienced declines in employment at the border. U.S. border employment in transportation and public utilities, which includes trucking and warehousing, was down 4 percent, and employment in wholesale trade was down 3 percent overall. Similar to the maquiladora employment declines in Mexico, employment declines on the U.S. side of the border also varied by region. For example, manufacturing employment declined by 18 percent overall in Texas’ border cities, and employment declines in wholesale trade and in transportation and public utilities were more pronounced in Arizona. (App. I provides a detailed analysis of employment trends in the U.S. border region.) Despite the contractions in manufacturing and certain other trade-related sectors, other sectors in the U.S. border region grew. As a result, total nonagriculture-related employment in the border area grew by 4 percent even after the U.S. economic slowdown began in 2000 and national employment contracted 1 percent through 2002. Some border metropolitan areas maintained even stronger employment growth. For example, the McAllen area grew by 9 percent between 2000 and 2002, while Laredo grew by 6 percent, and San Diego and Las Cruces grew by 5 percent each over the same period. On the other hand, El Paso’s overall nonfarm employment fell, primarily because its mix of industries is weighted towards sectors that have been shrinking (see app. I for details). The decline in maquiladora production and employment since the last quarter of 2000 is attributable to both cyclical and structural factors. 
Government researchers, academicians, economic studies, and industry representatives agree that the cyclical downturn in the U.S. economy has been a primary factor in the decline. However, industry sources and other experts emphasized that the maquiladoras have also been adversely affected by structural factors, such as increased competition in the U.S. market, particularly from China, Central America, and the Caribbean, and by the strength of the Mexican peso, which has further eroded the maquiladoras’ competitiveness. Changing Mexican tax policies have also contributed to the maquiladora decline by creating a climate of uncertainty for foreign investors. Meanwhile, owing to commitments undertaken under NAFTA, Mexico has phased out some of the key benefits of the Maquiladora program. It is clear from our research that all of these factors were at work before and during the recent maquiladora downturn, and that each was changing in a direction adverse to maquiladora production and employment. However, the sheer number of simultaneous changes over a relatively brief period makes it difficult to isolate or quantify the impact of individual factors. Although many government, academic, and industry sources generally refer to the cyclical downturn in the U.S. economy as a principal factor in the decrease in maquiladora employment and production since the last quarter of 2000, there is no such agreement on the relative importance of other factors associated with the decline of the maquiladoras. Therefore, the order in which we present these other factors is generally based on the results of our semistructured interviews with industry associations (see app. VI). In explaining the decline in maquiladora production and employment beginning in the last quarter of 2000, government, academic, and industry sources generally emphasized the role of the downturn in the U.S. economy. 
Of the 23 industry association representatives we interviewed whose membership had experienced a decline in production or employment, about three-quarters cited the recent downturn in the U.S. economy as a major factor. As noted earlier in this report, maquiladora production is often linked to U.S. manufacturing through production-sharing arrangements. In fact, about 98 percent of maquiladora production is destined for the U.S. market. Thus, it is not surprising that the maquiladoras are very sensitive to fluctuations in U.S. manufacturing and demand. Our analysis of economic data supports the conclusion of experts and interviewees, demonstrating that historically maquiladora employment typically grows when the overall U.S. economy expands and is negatively affected when the U.S. economy slows down (see app. II for a discussion of the effect of the economic downturn in the United States on employment for various maquiladora industrial sectors). Moreover, maquiladora employment has been even more sensitive to changes in U.S. manufacturing production, particularly in sectors such as textiles and autos, and a sharp drop in U.S. manufacturing has characterized the present U.S. economic slowdown. As figure 10 illustrates, maquiladora employment shows a correlation with U.S. economic performance over the past two decades. On average, maquiladoras added almost 118,000 employees annually from 1995 to 2000. During this period, U.S. annual economic growth averaged 3.6 percent. However, in 2001, as U.S. economic growth slowed to 1.4 percent, the maquiladoras lost nearly 229,000 jobs. Moreover, although the Mexican economy as a whole is very closely linked to that of the United States, the maquiladoras appear to have been affected by the U.S. economic slowdown more severely than the Mexican economy overall. While Mexico’s economy contracted 0.2 percent in 2001, it resumed growth at 0.7 percent in 2002. 
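The growth-rate comparisons above amount to simple year-over-year percent-change arithmetic. The sketch below shows the calculation; the `employment` series is hypothetical filler used only to exercise the helper, and is not GAO data (the actual series appears in figure 10):

```python
def pct_change(levels):
    """Year-over-year percent change for a sequence of annual levels."""
    return [100.0 * (curr - prev) / prev for prev, curr in zip(levels, levels[1:])]

# Hypothetical annual maquiladora employment levels (thousands);
# illustrative only.
employment = [1000, 1118, 1236]
growth = pct_change(employment)  # one growth rate per year-over-year step
```

By the same arithmetic, a fall from an indexed peak of 100 to 73 corresponds to the 27 percent decline cited for Juarez: `pct_change([100, 73])` yields `[-27.0]`.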
However, the maquiladora sector declined in both years, by 9.2 percent and 8.3 percent, respectively. Of the industry associations indicating that their membership had experienced a decline in employment or production, about half reported that the maquiladoras had been more negatively affected by the U.S. economic downturn than had other businesses in Mexico. Among the 23 industry associations that indicated a decline in their memberships’ employment or production, mounting foreign competition in the U.S. market was the most frequently offered explanation for the decline of the maquiladoras over the past 2 years. Over one-half of the representatives of industry associations referred specifically to the role of China in the maquiladoras’ decline. One maquiladora spokesman, for example, suggested that China’s entrance into the World Trade Organization (WTO) has made that country a more attractive choice for foreign direct investment, while foreign investment in Mexico’s maquiladoras has decreased. Among the major suppliers of imports to the United States, Mexico ranked second and China third in 2002. As figure 11 illustrates, both Mexico and China experienced significant growth in exports to the United States from 1995 to 2002. However, between 2000 and 2002, U.S. imports from Mexico grew at a slower pace than those from China. As a result, the gap between Mexico and China narrowed in China’s favor. As appendix III details, Mexico recently lost market share in 47 out of 152 major U.S. import categories. At the same time, China gained U.S. market share in 35 of those 47 import categories, including toys, furniture, electrical household appliances, television and video equipment and parts, and apparel and textiles. Some of these industries represent significant sectors of maquiladora production. Recent International Trade Commission (ITC) staff research suggests that while Mexico does face increased competition from China in the U.S. 
market, some sectors are more threatened than others. According to the ITC staff research, a growing share of some textile and apparel products sold in the United States is being produced in China rather than Mexico. In contrast, this staff research notes that within the machinery sector, the data did not indicate a shift in competitiveness away from Mexico toward China. Mixed results are apparent in the electronic products sector. Mexico lost U.S. market share to China in the telephone and telegraph equipment segment in both 2001 and 2002, and Mexico’s gain in the computer hardware segment in 2001 was more than offset by a sharp loss to China in 2002. The ITC staff research noted above concludes that China has competitive advantages over Mexico in terms of labor costs, electricity costs, and diversity of component suppliers. In this context, it is worth noting that wages along the U.S.-Mexico border, where the maquiladoras are concentrated, tend to be higher than in other areas of Mexico. More recent ITC staff research indicates that the cost of water for industrial uses (important in the textiles industry) and corporate income tax rates are lower in China. On the other hand, the ITC staff research suggests that Mexico’s comparative advantages include lower transportation costs, shorter transit times, and lower international communication costs. Mexico also provides greater protection for intellectual property, more transparency in regulation and administration, and a network of free trade agreements with third countries. Several industry representatives noted that Mexico also faces increased competition from countries in Central America and the Caribbean. One industry spokesperson noted that the U.S. decision in May 2000 to grant NAFTA-parity access to Caribbean Basin Initiative (CBI) countries had eroded Mexico’s ability to compete in the U.S. 
apparel market, particularly because a number of Central American and Caribbean countries have lower labor costs than Mexico. According to a Mexican economic research group, manufacturing wages in Mexico are almost 67 percent higher than in the Dominican Republic and about 92 percent higher than in Honduras. The heightened global competition from China and CBI countries is part of a larger phenomenon in which the benefits enjoyed by maquiladoras and other Mexican producers have eroded as U.S. trade preferences or liberalization accorded to other countries have expanded. The recent experience of the Mexican textiles and apparel industry, one of the major maquiladora sectors, illustrates this point. In 1994, NAFTA gave Mexico preferential access for its textiles and apparel. Other countries’ exports to the United States and Canada generally did not receive similar advantages. U.S. imports of Mexican textile and apparel products grew rapidly, with Mexico’s share of total U.S. imports in this sector doubling from 7 percent in 1994 to 14 percent in 2000 (see fig. 12). Mexico surpassed both China and the Caribbean Basin countries to become the largest supplier to the U.S. market. However, under the Trade and Development Act of 2000, the United States allowed textile and apparel products from Caribbean Basin countries that met certain requirements to receive preferential access to the U.S. market. This legislation also stipulated that to benefit from the special treatment, CBI-based apparel operations must use U.S.-made inputs, and, according to a Mexican textile industry association, operations have consequently shifted away from using Mexican textiles. In addition, under the WTO Agreement on Textiles and Clothing, all quotas on textile and apparel products are being phased out by 2005. For some products, quotas have already been removed. Despite the recent U.S. recession and a decline of total U.S. imports of textiles and apparel by 5 percent between 2000 and 2002, U.S. 
imports of textiles and apparel from China rose 12 percent, making China again the largest foreign supplier to the U.S. market. Figure 12 shows these changing patterns of U.S. imports in textiles and apparel from Mexico, China, and CBI countries. Many industry representatives whom we contacted also called attention to the role of the strengthening Mexican currency in eroding the maquiladoras’ competitiveness. Historically, growth periods in the maquiladoras have been associated with devaluations of the peso. For example, after the peso was devalued in 1984, there was a 3-year surge in U.S. automotive industry investments in maquiladora plants. Similarly, according to a study by the El Paso Branch of the Federal Reserve Bank of Dallas, the peso devaluation in December 1994 played a key role in spurring the expansion of Mexico’s maquiladoras during the second half of the past decade. However, beginning in the last quarter of 1998, the Mexican peso consistently appreciated against the dollar in real terms, a trend that continued while the maquiladoras experienced their greatest employment decline, from the end of 2000 to the beginning of 2002 (see app. II for the relative dependence of maquiladora employment on the real peso exchange rate). As the peso appreciated in real terms, maquiladora operating expenses increased. Moreover, this real appreciation of the Mexican peso took place as the currencies of some of Mexico’s East Asian competitors were depreciating against the dollar. For example, figure 13 compares the performance of the Chinese yuan to the Mexican peso, in real terms, between 1995 and 2002. Unlike the peso, the yuan has actually depreciated since early 1998. Among industry groups whose members had experienced losses in employment or production, about two-thirds of those we interviewed indicated that uncertainty resulting from Mexican government tax policies was a major factor in the maquiladoras’ decline. 
These groups noted that such uncertainty had caused some firms to withdraw from, or downsize, their operations in Mexico and had also discouraged new foreign direct investment in Mexico. In particular, industry representatives said that frequent changes to the fiscal regime had increased the tax burden and administrative costs to maquiladoras. They were also concerned that the frequent changes reduced the maquiladoras’ ability to develop long-term investment plans. In addition to duty-free treatment on imports of parts, components, and other inputs, maquiladora plants enjoyed, at least until the mid-1990s, virtual freedom from taxation. Though legally subject to income taxes, in practice, the companies paid only a small assets tax, a flat minimum of 2 percent of the value of the maquiladora’s assets. Moreover, the maquiladoras were permitted to use the cost of wages to offset their tax on assets. This virtually eliminated taxes for some maquiladoras. According to experts, the twin benefits of duty-free imports and minimal taxation were primary incentives for foreign firms to establish manufacturing operations in Mexico. The tax regime applicable to maquiladoras remained constant for almost 30 years but began to evolve rapidly in the 1990s. The most significant of these tax changes, the treatment of what are known as “permanent establishments,” is frequently noted by industry groups and others as a cause of investor uncertainty about the industry. A permanent establishment typically is a branch of a company from one country that is doing business in another “host” country and that may be taxed in that host country. According to U.S. Treasury officials, permanent establishment is a concept found in virtually all double taxation treaties. Mexico adopted the permanent establishment concept as part of its income tax law in 1981. According to U.S. 
Treasury officials and Mexican tax experts GAO consulted, Mexico essentially exempted maquiladoras from the tax that could be imposed on permanent establishments until 1998. However, starting in 1998, Mexico began seeking to treat the foreign parent companies of maquiladoras as having permanent establishments in Mexico for tax purposes. By treating the maquiladoras as permanent establishments, the Mexican government could subject the foreign parent companies to taxation, potentially allowing Mexico to increase the revenues it collects from maquiladora operations. The right of the Mexican government to tax maquiladoras as permanent establishments was affirmed in the U.S.-Mexico tax treaty of 1992. However, U.S. companies with maquiladora operations in Mexico were concerned that Mexico’s application of permanent establishment status to their maquiladora operations would subject them to double taxation. This could occur if Mexico imposed a broad definition of how permanent establishments could be taxed that the U.S. Treasury would not accept, because it would prevent the U.S. parent company from receiving a full credit in the United States for the taxes actually paid in Mexico. Resolution of potential problems, such as double taxation, associated with the treatment of maquiladoras as permanent establishments has necessitated a series of additional bilateral agreements between the United States and Mexico. It took several years and several different iterations to finally resolve such practical problems, and this caused a prolonged period of uncertainty for maquiladoras. Other changes in Mexico’s tax regime have contributed to the climate of investor uncertainty. In 2002, for example, Mexico limited the ability of businesses, including maquiladoras, to take a tax credit on salaries. According to industry representatives, this provision could have significantly increased the tax burden on some maquiladoras. 
However, according to Mexican officials, this tax provision has subsequently been ruled unconstitutional. The phasing out of maquiladora benefits as part of NAFTA was also cited by industry associations as a major factor in the decrease in maquiladora production and employment. When NAFTA was signed in 1993, it envisioned fundamental changes to the maquiladora model. The most significant of these changes was embodied in Article 303 of NAFTA, which eliminated duty drawback (or refunds of duties) for inputs of non-NAFTA origin as of January 1, 2001, if the final products incorporating these inputs are to be subsequently exported to another NAFTA country. For various reasons, notwithstanding the 7-year grace period provided, the maquiladoras did not develop a network of domestic suppliers in Mexico. As a result, implementation of Article 303 has adversely affected the competitiveness of maquiladoras that rely on non-NAFTA suppliers for inputs and resulted in the closure of some maquiladora firms. According to officials with the Office of the U.S. Trade Representative, some aspects of the Maquiladora program were not consistent with NAFTA’s trade objectives. First, the duty drawback provisions of the Maquiladora program conflicted with NAFTA’s rules of origin requirements. Under NAFTA’s rules of origin, goods traded among NAFTA partners are allowed duty-free status only when the goods comprise a minimum percentage of North American content. However, the Maquiladora program provided duty drawbacks for inputs imported to Mexico from any source, including non-NAFTA countries, undermining the duty-free benefits that North American products were to receive in Mexico as a result of NAFTA. Second, such drawback programs represented an advantage for exporters over firms producing for the domestic market, since the latter would not receive an equivalent duty drawback. 
In negotiating NAFTA, the United States hoped to reverse this advantage, which had led to the development in Mexico of an economic system with separate production tracks for exports and for goods destined for domestic consumption. In fact, U.S. officials explained that they envisioned the gradual phasing-out of maquiladoras with the implementation of NAFTA, as duty-free treatment would apply to all trade among NAFTA member countries. The rationale behind Article 303 was to encourage firms to develop North American suppliers for critical inputs by providing an incentive for maquiladoras to shift sourcing of components or inputs to North America, including Mexico. The development of a network of North American suppliers would mean that more value would be added during the production process in Mexico, the United States, and Canada. The elimination of duty drawback would necessitate significant changes in the sourcing of maquiladora inputs, particularly for maquiladora operations of some Japanese and other Asian companies that were heavily dependent on certain inputs from the Far East. The implementation of Article 303 was therefore scheduled for January 1, 2001, 7 years after NAFTA’s entry into force, to allow the maquiladoras to relocate their supply chains to North America. However, a network of Mexican domestic suppliers for the maquiladoras largely failed to materialize during this period. Maquiladora observers have suggested several explanations, principally the scarcity of credit in Mexico to support entrepreneurial activity and the lack of an entrepreneurial culture among Mexican businesses. Under NAFTA, Mexico could have chosen to counter the loss of duty drawback following implementation of Article 303 by reducing or eliminating its most favored nation duties on key inputs. U.S. officials note that Canada eliminated hundreds of its most favored nation duties before Article 303 took effect. 
Instead, in order to cushion the impact of NAFTA Article 303, Mexico instituted a measure known as the sectoral promotion program, with targeted and reversible tariff reductions. Since Article 303 was implemented, maquiladoras that depend on inputs from outside North America have seen their competitiveness erode. Some maquiladoras have reported production cost increases of up to 20 percent due to the implementation of Article 303. Japanese, Korean, and Taiwanese companies involved in maquiladora production have been particularly affected by the implementation of Article 303 and have led the way in relocating from Mexico to other countries. Industry associations we contacted, representing maquiladoras in the Tijuana area, where Asian-owned maquiladoras are concentrated, as well as an association representing Japanese business in Mexico, attributed the departure of maquiladora firms from Mexico, at least in part, to the implementation of Article 303. Significant challenges continue to confront Mexico’s maquiladoras, although recent industry and government action and the prospect of future Mexican reforms may bolster prospects for the maquiladoras’ recovery. The downturn during the past 2 years has accelerated ongoing industry evolution and has been a catalyst for several industry and government changes to improve the competitiveness of the sector. However, maquiladoras still face fundamental challenges. For the most part, meeting these challenges depends on further action by the government of Mexico, but some of the challenges are related to U.S. policies that are likely to put additional pressure on maquiladoras. The factors described in the previous section as having a role in the maquiladoras’ recent decline still confront the industry. As a result, some Mexican government officials have stressed the need to move beyond the current “maquiladora model” to attract a new generation of more technologically advanced operations that would allow Mexico to remain competitive. 
Given the continuous evolution of maquiladora operations, Mexico’s maquiladora industry is now a complex sector with substantial diversity. One academic expert concludes that as firms become involved in more sophisticated, capital-intensive operations, they are less likely to close and move their plants because of cyclical downturns such as the one maquiladoras faced after 2000. Some Mexican maquiladoras are now recognized as having sophisticated production and management methods. According to industry experts, such maquiladoras are better positioned to weather the maquiladora downturn and deal with continuing challenges. Nevertheless, researchers point out that the transition to more advanced production practices is quite uneven. Many maquiladoras remain oriented toward lower-skill activities that involve few Mexican inputs besides labor. The downturn of the last several years has resulted in a shake-out involving some losers, notably among operations of this type. One positive aspect of the recent maquiladora downturn is that it has spurred some actions by industry and the government of Mexico to restore Mexican competitiveness. In the face of increased global competition, maquiladoras are seeking to capitalize on Mexico’s unique competitive advantages, particularly those associated with that country’s proximity to the United States and its growing network of free trade agreements. For example, noting the recent establishment of plants in Juarez by several computer manufacturing firms, one industry analyst explained that Mexico’s quick time-to-market location is essential for the success of both new products and repairs in the computer value chain. Similarly, a senior industry expert noted that the growth of automotive maquiladoras in northern and central Mexico underscores the competitive advantages resulting from the efficient combination of U.S. and Mexican inputs. 
According to this source, notwithstanding the arrival of new competitors, the Mexican automotive industry is poised to take advantage of the full opening of the regional North American automotive market that will occur in 2004. Mexico also stands to benefit as a direct and indirect automotive sector exporter to the United States and other countries with which Mexico has signed trade agreements. Some industry sources reported unexpected benefits associated with the recent losses experienced by the maquiladoras. According to industry representatives in Juarez, the rapid pace of maquiladora growth had put intense pressure on local infrastructure during the late 1990s. Local authorities simply could not keep up with the demand for health, education, and other services associated with the dramatic increases in population growth that accompanied the expansion of maquiladoras. They viewed the slowdown of the past 2 years as a welcome respite. In addition, a number of industry representatives noted that the downturn has resulted in significant drops in employee turnover and in the associated hiring and training costs. Prior to the downturn, they said, maquiladoras in some border cities reported very high employee turnover rates because the rapid growth in maquiladora establishments allowed workers to continuously find new jobs in other plants. One expert suggested that such turnover in some border cities had reached 80 percent at the height of the maquiladora boom. Consequently, employers had significant hiring and training costs and were forced to keep some positions overstaffed to compensate for the turnover. Such turnover also had more fundamental implications for the ability of some maquiladoras to build a highly skilled workforce, since it is not feasible to invest in significant training for workers whose expected tenure with a firm is only a few months. 
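The hiring-and-training-cost effect of turnover described above reduces to simple arithmetic. The sketch below is a hypothetical model, not data from this report: the headcount and per-replacement cost are invented, and only the 80 percent rate echoes the figure one expert cited:

```python
def annual_turnover_cost(headcount, annual_turnover_rate, cost_per_replacement):
    """Estimated yearly hiring-and-training cost: each departing worker
    is assumed to be replaced at a fixed recruiting/training cost.
    All inputs are illustrative assumptions."""
    replacements = headcount * annual_turnover_rate
    return replacements * cost_per_replacement

# A hypothetical 1,000-worker plant with 80 percent annual turnover and
# $500 per replacement; halving the turnover rate halves the cost.
high = annual_turnover_cost(1000, 0.80, 500)
low = annual_turnover_cost(1000, 0.40, 500)
```

Under these assumptions, the drop in turnover that industry sources describe translates directly into proportionally lower administrative and training outlays.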
Industry sources told us that the turnover rates had dropped sharply since the downturn, and some maquiladoras report that this has had a positive effect on administrative costs as well as the cost of training new employees. Finally, industry sources stressed the importance of Mexican government action for the development of a favorable business environment that can respond quickly to the changing market forces faced by maquiladoras. In response to industry pressure, the Mexican government recently undertook several measures in support of the maquiladoras, primarily aimed at easing irritants. On May 12, 2003, the Mexican government issued a decree modifying certain aspects of the Maquiladora program. The reforms are aimed primarily at simplifying regulations that apply to companies that provide support and logistics services to maquiladoras and at enhancing legal certainty for Mexican exporters, including maquiladoras. An important provision of the new decree will be streamlined customs requirements for companies with several subsidiaries operating under the Maquiladora program. This would allow such companies greater flexibility in the transfer of finished or semifinished products from one subsidiary to another. The decree also contains provisions that would reduce administrative costs and procedures. For example, a maquiladora will only have to submit a single report on an annual basis, which can be submitted electronically. Based on initial industry reaction, it is unclear whether the new decree will satisfy critics seeking greater legal certainty and improved incentives for maquiladoras. In response to the recent crisis in the maquiladora industry, Mexico has greatly expanded its sectoral promotion program (PROSEC). First launched in November 2000, PROSEC was intended to reduce the impact of NAFTA Article 303 (which became effective January 1, 2001) by providing duty rates of either 0 or 5 percent on imported inputs from non-NAFTA suppliers. 
Initially, maquiladora industry representatives complained that PROSEC was too restrictive because it applied to very few imported inputs. However, throughout 2001 and 2002, the list of products eligible for tariff reduction under PROSEC was progressively expanded to include more than 16,000 products from 22 industry sectors, including electronics and textiles and apparel. In September 2002, the Mexican government provided additional support to the maquiladoras through a program called the Information Technology Agreement (ITA) Plus. ITA Plus immediately removed tariffs from inputs, parts, and components used in the electronic and high-technology sectors, regardless of the country of origin. It also provided for the gradual removal of tariffs from semifinished and finished products in those sectors. According to Mexican officials, in addition to lowering tariffs on electronic and high-technology inputs, ITA Plus may help to reduce the administrative burden on the maquiladoras. The Presidential Council for Competitiveness was created in July 2002 to promote investment, increase employment, and accelerate Mexico’s economic growth. A cooperative effort between government and business chaired by the Minister of Economy, the council has undertaken activities that include the creation of fiscal stimulus packages for export factories in 12 different sectors of the economy, including maquiladoras as part of the in-bond industry. One objective of the council is the development of manufacturing clusters, which would deepen the supply chain in Mexico. In support of the work of the council, the Secretariat of the Economy has agreed to fund, through the National Council of the Maquiladora Export Industry (CNIME), a comprehensive study of the maquiladora industry. Recent agreements between the United States and Mexico have largely resolved the threat of double taxation of U.S. 
firms that was raised by Mexico’s efforts to define maquiladora parent companies as permanent establishments, discussed above. As a result of a Second Additional Protocol to the U.S.-Mexico tax treaty, signed in 2002, the United States will be able to provide a foreign tax credit to U.S. firms that have paid income taxes to Mexico with respect to their maquiladora operations. Mexico has also independently announced that it will make no changes to existing agreements on permanent establishment until 2007. These steps by the Mexican government seem to reflect wider recognition by officials in Mexico City of the maquiladoras’ importance to Mexico. Previously, industry representatives had complained that the Mexican government was slow to respond to the challenges faced by the maquiladoras. According to these representatives, the Mexican government initially took “a wait and see” approach to the maquiladoras’ decline, in the belief that labor-intensive maquiladora operations leaving Mexico would be readily replaced by better paid, more profitable industries. As job losses continued in the first three months of 2002, maquiladora representatives pressured the government to implement remedial measures. Notwithstanding the initiatives discussed above, government, industry, and academic sources suggest that meeting remaining challenges to the future success of the maquiladoras will, in some cases, require fundamental Mexican reforms in several areas, including energy, infrastructure, and labor. However, the reforms Mexico is pursuing in these areas may be difficult to bring about. Government officials and industry representatives stated that there is an urgent need for energy reform in Mexico. Energy sector reform is important to the maquiladora industries because they require reliable energy at competitive prices to compete with suppliers in other nations. 
The ITC, for example, has noted that electricity and industrial water costs are two areas in which Mexico is less competitive than China. The Fox administration maintains that without energy reform, Mexico may experience a power crisis as early as 2004, and it introduced an energy reform bill in August 2002. The legislation stalled in the Mexican Congress, however, because some legislators opposed aspects of reform dealing with privatization that would entail amending the Mexican constitution. Maquiladora and other Mexican industry associations cite improving Mexico’s infrastructure as critical to advancing Mexico’s competitiveness. According to a report by the Mexican Government Commission for Border Affairs, the six Mexican states that border the United States share the advantage of an adequate basic infrastructure, with a road network variously described as good, fluid, or satisfactory. However, even in this region, about 32 percent of the Mexican federal highways are in poor condition. Another study found that critical problems persist in Mexico’s road infrastructure, notably, limited public or private investment in highways in recent years. Some maquiladora representatives we spoke with cited infrastructure shortcomings as a disincentive for potential investors in maquiladoras. According to Mexican labor officials, as part of its platform to modernize Mexico and improve its international competitiveness, the government has sought to reform the labor code. Maquiladora representatives stated that improvements in labor productivity depend on reform of labor regulations to provide increased flexibility to employers. The Fox administration has responded to this need for labor reform by developing a labor reform package that represents a compromise between labor groups, business, and government. 
Key elements of the reform package include the use of secret ballots in union elections, the allowance of more than one union to represent worker interests, expanded employer flexibility to hire workers on a trial basis, and a strengthened binding arbitration process. This reform package was not passed by the Mexican Congress before congressional elections were held in July 2003, in part because it lacked consensus support within the Mexican labor movement. A consultant for the maquiladora industry cited worsening shortages of trained labor in most cities where maquiladoras are concentrated as among the industry challenges that the government must address. One academic study of the maquiladoras’ viability found that to develop more technology-intensive operations, Mexico needs a large number of highly educated workers. However, according to the Commission on Border Affairs, the data indicate a low level of educational attainment in the economically active population along the border, with over one-third of adults having completed only primary education or less. The search for better educated workers has led a number of companies to establish assembly plants in cities farther from the border that have better reputations for public secondary education and trade schools. Action by Mexico is key to the maquiladoras’ future viability, particularly since U.S. approaches to trade liberalization and homeland security may put additional pressure on maquiladora operations. Industry representatives noted that present U.S. policies in these areas could undermine current benefits and reduce future competitiveness. Regarding U.S. trade policy, the future development of the maquiladora industry in Mexico may also be affected by further changes in competitors’ access to the U.S. market. 
The United States is engaged in trade negotiations in several venues, including the Doha Round among the 146 members of the WTO, the Free Trade Area of the Americas (FTAA) involving 34 nations of the Western Hemisphere, and the U.S.-Central America Free Trade Agreement. These negotiations may reduce barriers to non-NAFTA countries’ products to levels similar to those enjoyed by the NAFTA partners Mexico and Canada. For example, in the WTO, the United States has proposed to eliminate all industrial tariffs by 2015, and in the FTAA, the United States has proposed to phase out textile and apparel tariffs within 5 years after the agreement is implemented, if its hemispheric partners reciprocate. As we concluded in a 2001 report, expansion of trade benefits to wider numbers of competitors, while benefiting U.S. consumers and other trade partners, dilutes the benefits of prior trade preferences. Some business association representatives that we interviewed expressed concern that future U.S. trade agreements would erode benefits provided to Mexican suppliers in the U.S. market under NAFTA. Representatives for one industry association expressed hope that the United States would use negotiations such as the FTAA to strengthen regional competitiveness relative to global competitors such as China. Maquiladora industry experts also expressed concern that U.S. security measures instituted at ports of entry after September 11, 2001, could erode the Mexican maquiladora industry’s advantage of proximity to U.S. markets. Of particular concern are U.S. government measures that require advance notice for transborder shipments of goods and additional information on the entry into and departure from the United States of every foreign citizen. 
Companies that use just-in-time operations, an important element in some maquiladora operations, could be especially hurt by requirements related to advance notice for shipments, because they could not ship goods immediately on receiving an order. Firms that rely on regular and efficient movement of workers and service operations across the border could be especially affected by the information requirements for Mexican workers who cross the border frequently. For example, at one major border crossing in downtown El Paso, less than a mile from Interstate 10, significant congestion would result if U.S. authorities had to screen traffic bound for Mexico to obtain information from every departing alien. Successful implementation of these new requirements will require close coordination of U.S. and Mexican national and local officials as well as adaptation of the private industry to the new requirements. Both the United States and Mexico have an interest in the future of maquiladoras given their central role in U.S.-Mexico trade and the border economy. Partly driven by maquiladoras, Mexico has assumed a more prominent place among U.S. trade partners in recent years, becoming the United States’ second leading trading partner, after Canada. Moreover, production and employment linkages have developed between maquiladoras and producers throughout the United States and are based on the high volume of U.S.-generated components used in maquiladora operations. Businesses in communities on the U.S. side of the border provide services to the maquiladoras, such as customs brokerage and commercial transportation. Retail sales to Mexican citizens in U.S. border communities contribute substantially to U.S. business and tax receipts. The decline in Mexico’s maquiladora production and employment has already taken its toll on cross-border trade and trade-related employment in certain U.S. border communities. 
Maquiladoras have become an even more important element of the Mexican economy, particularly over the decade of the 1990s, when maquiladora growth propelled Mexico into the ranks of the world’s leading exporters and generated 900,000 new jobs. Employment created by maquiladoras on the Mexican side of the border has become a mainstay of economic activity in that country. The decline over the past 2 years has served as a catalyst for further transformation of the industry, as well as Mexican industry and government efforts to restore competitiveness. The challenges still confronting maquiladoras and the pressure from U.S. trade and homeland security policies lend urgency to Mexican efforts to create an environment where cross-border links between U.S. and Mexican firms and communities can continue to prosper. We provided a draft of this report for comment to five U.S. government agencies: Department of State, the Office of the U.S. Trade Representative, U.S. Customs and Border Protection (formerly U.S. Customs), Department of the Treasury, and the U.S. International Trade Commission. We also asked for comments from three Mexican government agencies: the Ministry of the Economy (Secretaría de Economía) the Ministry of the Treasury (Secretaría de Hacienda), and the National Institute of Statistics, Geography and Information Technology (Instituto Nacional de Estadística, Geografía e Informática). We received informal written comments from all of these U.S. and Mexican government agencies, except Mexico’s Ministry of Economy. In addition, the Department of State provided formal written comments, which are reprinted in appendix VII. In general, all of the agency comments were technical or editorial in nature, which we incorporated as appropriate in the text of our report. In addition, U.S. 
ITC staff had more extensive comments related to our decision to exclude firms operating under the so-called PITEX program from the general scope of our work, noting that PITEX firms are important in certain sectors, such as autos, and account for a substantial share of Mexico’s total exports to the United States. While we recognize that firms operating under PITEX are an important element in U.S.-Mexico production-sharing operations, as are maquiladoras, we limited our report to the Maquiladora program for several reasons. First, our requesters specifically expressed an interest in the maquiladora industry and the effects of the recent decline of the maquiladoras along the U.S.-Mexico border. Unlike maquiladoras, which are still concentrated along the border, firms operating under the PITEX program are spread throughout Mexico. Second, the data the government of Mexico collects on maquiladoras are significantly more extensive and are not altogether comparable to the data collected on PITEX firms. Thus, there would have been problems in comparing the two types of operations. Finally, the data available on PITEX firms suggest that they have experienced trends in recent years not unlike those observed among maquiladoras. Including data on PITEX firms would not have significantly altered our message. We are sending copies of this report to other interested members of Congress, the Secretary of State, the Secretary of the Treasury, the U.S. Trade Representative, the Secretary of the Department of Homeland Security, the Commissioner of Customs, and the Chairman of the U.S. International Trade Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-4347. Other GAO contacts and staff acknowledgments are listed in appendix VIII. This appendix examines U.S. 
employment changes along the U.S.-Mexico border and explores whether employment in the border areas of the United States has been disproportionately affected by the recent slowdown in U.S. economic activity and the associated decline in cross-border trade between the United States and Mexico. For the purpose of this analysis, the U.S. border with Mexico is defined as the metropolitan statistical areas (MSA) closest to the U.S.-Mexico border, comprising the MSAs for San Diego, California; Tucson, Arizona; Las Cruces, New Mexico; and El Paso, Brownsville, Laredo, and McAllen, Texas. U.S. employment in the border area increased by approximately 591,000 jobs between 1990 and 2002, largely owing to the overall national trend in employment growth. For example, according to our analysis, 60 percent of the jobs gained were due to the growth of the national economy. However, 230,000 of those jobs could be linked to local factors, that is, factors associated with the area’s attractiveness for employment creation. Most of the new jobs were added from 1995 to 2002. However, the ways in which each border subregion benefited from the employment growth vary considerably. U.S. employment in the U.S.-Mexico border area grew by 35 percent between 1990 and 2002, gaining 591,000 jobs. The services sector was the largest employer and accounted for approximately 48 percent of the job growth (282,000 jobs) during this period. Other sectors with notable employment growth were retail trade (93,000 jobs); finance, insurance, and real estate (20,000 jobs); transportation and public utilities (31,000 jobs); and government (128,000 jobs). As figure 14 shows, total nonfarm employment growth rates in the border region were generally similar to those observed for the United States from 1993 to 1995. However, employment growth in the border MSAs exceeded employment growth at the national level after 1995. Furthermore, growth of nonfarm employment in the border area continued even after the U.S. 
economic slowdown began in 2001. Laredo and McAllen grew fastest, followed by Brownsville, Tucson, Las Cruces, and San Diego. Some border industries experienced a decline in employment in 2001 and 2002, particularly manufacturing (down 6 percent), transportation and public utilities (down 4 percent), and wholesale trade (down 3 percent) (see table 1). As table 1 shows, declines in manufacturing were relatively more severe in Texas (down an average of 18 percent), while declines in wholesale trade and transportation and public utilities were more pronounced in Arizona (down 9 and 11 percent, respectively). A closer look at Texas further shows that the manufacturing, transportation, and public utilities sectors declined after 2000 in all four Texas border MSAs. To analyze the factors at the national and local levels that contributed to the employment trends described above, we employed a methodology known as shift-share analysis that decomposes employment growth (or decline) in a region over a given time period into three components: the national growth effect, the industry-mix effect, and the local (competitive) effect. 1. National growth effect. The national growth effect is that part of a regional change in total employment ascribed to the national growth rate of total employment. It assumes that the region’s employment growth matches the overall national rate. The national growth component is the change that would be expected given that the local area is part of a changing national economy. Our analysis shows that from 1990 through 2002, the border counties gained 339,100 jobs due to economic trends at the national level (see table 2). However, the actual gain occurred prior to 2000, as an estimated 15,800 jobs were lost due to the national trend in 2001 and 2002. The border area’s biggest employer, the service sector, had the highest national growth component (97,300 jobs), followed by the government (71,200 jobs) and retail trade (65,900 jobs) sectors. 
Our analysis incorporating possible differences among the border subregions shows that from 1990 through 2002, nonfarm employment growth in San Diego accounted for nearly 50 percent of the increase in employment due to employment expansion at the national level. 2. Industry-mix effect. An industry-mix effect is the amount of change that a region would have experienced had each of its industries grown at their national industry rates, less the national growth effect. This component identifies the share of local job growth that can be attributed to the region’s mix of industries and seeks to address whether employment growth in an area outpaced the nation owing to a concentration of faster growing industries. For the period 1990 to 2002, the border area gained 21,200 jobs owing to a concentration of faster growing sectors there than in the nation as a whole. This gain in total employment was achieved primarily with employment gains in services (114,400 jobs) and construction and mining (4,200 jobs), and it occurred despite employment losses totaling 95,200 jobs from other sectors, notably manufacturing (69,800 jobs), government (10,500 jobs), and wholesale trade (8,500 jobs). Moreover, 47 percent of the employment growth due to the industry-mix effect occurred between 2001 and 2002. Among the subregions, the industry-mix component for all sectors decreased total nonfarm employment during 1990–2002 only in El Paso, Texas. 3. Local (competitive) effect. A local (competitive) effect seeks to isolate the extent to which factors unique to the local area have caused growth or decline in regional employment. The effect is defined as the employment change that remains after the national and industry-mix components have been accounted for, and it is therefore the purely regional aspect of the region’s employment growth. If a region’s competitive share is positive, the region is considered to have a local advantage in promoting employment growth. 
This advantage could result from such factors as local businesses having superior technology, management, location, or market access, or the local labor force’s having higher productivity, lower wages, or both. A negative competitive share component could be caused by local shortcomings in any or all of these aspects. Local conditions appear to have been a significant factor in the increase in U.S. border employment, particularly since 1995. Across all sectors, the competitive share component—employment growth attributable to local conditions—amounts to a net addition of 230,000 jobs. This indicates that the border area was competitive in securing additional employment from 1990 through 2002. As figure 15 shows, nearly all of these employment gains were realized in the years since 1995. Furthermore, 43 percent of border area employment gains owing to local factors were achieved between 2001 and 2002. The top three sectors in competitive share gains in employment from 1990 through 2002 were services (70,600 jobs), government (67,200 jobs), and manufacturing (37,500 jobs). However, for the 2000–2002 period, the transportation and public utilities sector showed a reduction in jobs (approximately 300 jobs) owing to local factors. In addition, factors unique to the local area caused employment declines in certain subregions and sectors during 1990–2002, notably, in Laredo, Texas, in construction and mining; El Paso, Texas, in wholesale trade and services; Brownsville, Texas, in finance, insurance, and real estate; Tucson, Arizona, in transportation and public utilities; and Las Cruces, New Mexico, in government employment. Furthermore, subregions in Texas generally lost their local edge in securing manufacturing employment from 1990 through 2002, and this loss was more pronounced in 2001 and 2002. 
Similarly, owing to local factors from 2001 to 2002, Tucson, Las Cruces, and El Paso lost jobs in transportation and public utilities; Tucson, El Paso, and Laredo lost employment in wholesale trade; El Paso lost service employment; and Brownsville and Las Cruces lost employment in the finance, real estate, and insurance sector. Our statistical analysis shows that the key factors cited in our semistructured interviews as responsible for the maquiladora downturn—namely, the U.S. general economic slowdown, particularly in U.S. manufacturing, and the real peso-dollar exchange rate—are significant determinants of maquiladora employment. We found a strong relationship between maquiladora employment and U.S. economic conditions. This relationship is stronger than that between maquiladora employment and the real peso-dollar exchange rate, but considerably weaker than that between maquiladora employment and changes in U.S. manufacturing shipments. We also found that maquiladora sectors are more sensitive to changes in U.S. manufacturing shipments than to broader U.S. economic conditions. A major reason for the rapid growth of the maquiladora industry has been its direct tie to the U.S. economy, particularly to U.S. manufacturing. As a result, the maquiladoras are partly independent of Mexico’s internal economic trends. This independence from the Mexican economy has made the industry a stabilizing force when the Mexican economy heads into recession. However, the direct tie to U.S. manufacturing also makes the industry sensitive to U.S. business cycles. As mentioned previously in the main body of this report, the number of maquiladoras and the employment they generate have declined from a peak reached in 2000. This decline has been attributed to several factors. The most important of these factors is the downturn in the U.S. economy. 
An additional factor alleged to have contributed to the decline is cost increases due to appreciation of the inflation-adjusted value of the peso relative to the dollar, that is, the real exchange rate of the peso. This appendix investigates the relationship between maquiladora employment in Mexico and U.S. economic performance and the real peso exchange rate. To determine the link between maquiladora employment and U.S. economic conditions, we assembled data on maquiladora employment in total and by main sectors as well as data on U.S. GDP on a quarterly basis from January 1980 to December 2002. We then converted all of these data to their natural logarithms and performed a regression of maquiladora employment on the real peso-dollar exchange rate and real U.S. GDP. The equation estimated was lnXj = α + β lnY + γ lnΩ + ε, where X is maquiladora employment, Y is U.S. gross domestic product, Ω is the exchange rate of the dollar relative to the peso, α, β, and γ are constants to be estimated, j indexes the maquiladora sectors, and ln indicates natural logarithms. The results of the regression are presented in table 3. As table 3 shows, maquiladora employment is very sensitive to U.S. economic growth and the exchange rate. Our results show that a 1 percent rise (or fall) in U.S. GDP increases (decreases) total maquiladora employment by 3.68 percent, while a 1 percent rise in the real peso exchange rate decreases maquiladora employment by 0.17 percent. Maquiladora employment is consequently more responsive to changes in the U.S. economy than to changes in the real exchange rate of the peso. In addition, maquiladora employment in the automotive sector is most responsive to change in U.S. GDP, while maquiladora employment in electrical apparatus and machinery is least responsive. The automotive sector is also the most responsive to real exchange rate variations, while the electrical materials sector is least responsive. 
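A log-log regression of this kind can be estimated by ordinary least squares. The sketch below illustrates the approach with synthetic data; the series and noise levels are assumptions for illustration, not the report's actual maquiladora, GDP, or exchange rate data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 92  # quarterly observations, roughly 1980 through 2002

# Synthetic illustrative series (assumed, not the report's data)
ln_gdp = np.linspace(8.0, 9.0, n) + rng.normal(0, 0.01, n)  # ln of real U.S. GDP
ln_fx = rng.normal(2.0, 0.1, n)                             # ln of real peso-dollar rate
# Build employment with elasticities near the reported estimates (3.68 and -0.17)
ln_emp = 1.0 + 3.68 * ln_gdp - 0.17 * ln_fx + rng.normal(0, 0.05, n)

# OLS estimate of lnX = a + b*lnY + g*lnOmega via least squares
X = np.column_stack([np.ones(n), ln_gdp, ln_fx])
a, b, g = np.linalg.lstsq(X, ln_emp, rcond=None)[0]
print(f"GDP elasticity ~ {b:.2f}, exchange-rate elasticity ~ {g:.2f}")
```

Because both sides are in natural logarithms, the estimated coefficients read directly as elasticities: a 1 percent rise in GDP is associated with roughly a b percent change in employment.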
To investigate the stability of our estimates, we divided our sample into three separate time periods: 1980 to 1985, 1986 to 1993, and 1994 to 2002. These three periods correspond roughly to the period before Mexican economic policy reform, the period after reform but before NAFTA, and the period after NAFTA implementation, respectively. Our analysis of the effect of U.S. GDP and the real peso-dollar exchange rate on total maquiladora employment during these three periods is shown in table 4. As the table shows, the responsiveness of maquiladora employment to U.S. economic conditions and the real peso exchange rate is fairly consistent with our results in table 3. However, the strongest maquiladora employment responsiveness to U.S. GDP growth occurred in the post-reform, pre-NAFTA period (1986 to 1993). The post-NAFTA period (1994 to 2002) has a lower response coefficient for GDP and a higher response coefficient for exchange rates. It should be noted that the peso depreciated considerably in December 1994, after the onset of the “peso crisis.” We also looked into whether the U.S. manufacturing sector has a unique effect that cannot be captured by overall U.S. GDP. To do so, we obtained data on U.S. manufacturing shipments and performed a set of regressions similar to those we performed using GDP. The results of our analysis appear in table 5. As can be seen from table 5, a 1 percent change in U.S. manufacturing shipments induces maquiladora employment growth of approximately 6.7 percent. Overall, the table shows that maquiladora employment’s response to changes in U.S. manufacturing shipments is larger than its response to changes in U.S. GDP. We also found that certain maquiladora sectors, such as textile products, furniture, and transportation equipment, are particularly sensitive to changes in U.S. manufacturing shipments. Although U.S. 
imports from Mexico ($130.8 billion) exceeded those from China ($109.2 billion) in 2001, these figures represented a decline of 3.2 percent for Mexico and an increase of 1.9 percent for China. In 2002, both countries experienced growth, but U.S. imports from China grew faster than U.S. imports from Mexico. This development, coming at a time of decreased maquiladora employment and increased plant closings, has led to speculation that Mexico is losing ground because of China’s production cost advantages. To highlight the competition between Mexico and China, we selected U.S. import items from Mexico in 1995 and 2002 with a value of more than $100 million in 2002. We also obtained information on U.S. imports from China that matched the categories of the imports from Mexico. We then selected U.S. imports for which the share from Mexico had declined between 1995 and 2002 and matched them with imports for which the share from China had increased between 1995 and 2002. In 2002, the United States imported from Mexico 152 categories of items valued at more than $100 million each. The total value of these items was $123.1 billion, while the total value of the same categories of items from China was $88.2 billion. From 1995 to 2002, the share of U.S. imports from Mexico decreased for 47 of the 152 categories. For these 47 categories, in 2002, the total value of imports from Mexico was $25.5 billion and the value of imports from China was $23.4 billion. China’s share of U.S. imports increased for 35 of the 47 categories. The total value of these 35 items was $20 billion for Mexico and $23 billion for China. Table 6 shows the top 25 U.S. import categories in which imports from China increased, while imports from Mexico declined between 1995 and 2002. As the table shows, Mexico and China appear to be in direct competition for several import categories. Although a direct causal link is difficult to establish, China seems to have gained U.S. 
market shares at the same time that Mexico has lost them in some import categories, such as toys, furniture, electrical household appliances, television and video equipment and parts, and apparel and textiles. Maquiladoras are concentrated in the categories where China appears to have gained U.S. market shares. Trade with Mexico through U.S.-Mexico border crossings dropped in 2001 and remained flat in 2002. While total trade through the four major land border ports fell by 5 percent in 2001, U.S. exports to Mexico through these ports fell by 10 percent. The port of Nogales, Arizona, experienced the sharpest decrease in trade, with total trade declining by 9 percent in 2001 and 13 percent in 2002. Table 7 provides information on U.S. imports, exports, and total trade with Mexico by border crossing. The four border crossings examined—Laredo, El Paso, San Diego, and Nogales—are Customs districts that represent 33 individual ports of entry along the U.S.-Mexico border. After growing rapidly throughout the 1990s, Mexican national maquiladora employment peaked in October 2000 and declined sharply through March 2002. However, the rise and decline in maquiladora employment varied by state and city. As table 8 shows, the city of Tijuana experienced both the greatest percentage increase in maquiladora employment (233 percent from 1990 through October 2000) and the greatest decline (30 percent through December 2002). For each state or city, table 8 shows the number of jobs in 1990, followed by the number of jobs at the peak of employment (usually around October 2000) and at the lowest point, or trough, following the peak. The table also includes the changes in employment in absolute and percentage terms. The rise and decline of maquiladora employment also varied by industry. 
Table 9 shows employment changes for three key industries—electronics, autos and parts, and textiles and apparel—along with details on the rise, peak, and trough for the top five border region cities in terms of maquiladora employment. Our work focused on employment and production trends on the U.S.- Mexico border and recent trends in the maquiladora industry. We also analyzed data on overall U.S.-Mexico trade and compared trends along the border with developments in the broader U.S. and Mexican economies. To complete our objectives, we conducted interviews with government officials in the U.S. and Mexico, as well as semistructured interviews with 29 industry associations. Between November 2002 and February 2003, we conducted site visits in three areas of the border with a considerable maquiladora presence: McAllen, Texas–Reynosa, Tamaulipas; El Paso, Texas–Juarez, Chihuahua; and San Diego, California–Tijuana, Baja California. Our selection criteria consisted of two characteristics integral to the maquiladora industry: (1) the number of maquiladora employees and (2) the number of maquiladora plants. In addition to conducting site visits in selected border areas, we met with U.S. officials and traveled to Mexico City to meet with Mexican government officials. In the United States, we met with officials from the Department of State, Office of the U.S. Trade Representative, International Trade Commission, Environmental Protection Agency, Immigration and Naturalization Service, Department of Labor, Department of Transportation, Department of the Treasury, and U.S. Customs. In Mexico, we met with officials from the Ministry of Economy; Ministry of Labor; National Institute of Statistics, Geography and Information Technology; Ministry of Treasury; Ministry of Government; and Ministry of Environment. We obtained, reviewed, and analyzed data from maquiladora industry experts, nongovernmental organizations, and Mexican and U.S. government agencies. 
We also met with academics at educational institutions in Mexico and the United States, including San Diego State University; the University of California, Los Angeles; University of California, San Diego; University of Texas at El Paso; Colegio de la Frontera, Tijuana; Universidad Nacional Autónoma de Mexico; and Universidad Autónoma Metropolitana de Xochimilco. In addition, we met with numerous representatives of industry and nongovernmental organizations as well as other maquiladora experts. To understand how communities along the U.S.-Mexico border are integrated and the role that maquiladoras play in U.S.-Mexico interdependence (objective 1), we interviewed experts on the maquiladora industry, academics, and representatives of nongovernmental organizations. We reviewed extensive documentation and academic research provided by these sources, analyzing economic, social, and political linkages between border communities and the influence of the maquiladora industry in the border region. We identified similarities and differences between border communities with regard to social and economic integration. To review the status and trends in trade, employment, and output (objective 2), we obtained original official data on employment and trade from both U.S. and Mexican government agencies. We analyzed the data to identify trends in employment and production in the U.S.-Mexico border area. We compared our analysis of trends along the border with developments in the broader U.S. and Mexican economies. For example, for the United States, we conducted a shift-share analysis that decomposes employment growth (or decline) in a region over a given time period into three components: the national growth effect, the industry mix effect, and the local (competitive) effect. 
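The shift-share decomposition used for objective 2 splits a local industry's job change into a national growth effect, an industry-mix effect, and a residual local (competitive) effect, which sum to the actual change. A minimal sketch of the arithmetic, using invented employment figures purely for illustration (none of these numbers come from the report's data):

```python
def shift_share(emp_local_start, emp_local_end,
                emp_ind_nat_start, emp_ind_nat_end,
                emp_tot_nat_start, emp_tot_nat_end):
    """Decompose a local industry's employment change into three effects."""
    g_nat = emp_tot_nat_end / emp_tot_nat_start - 1  # national all-industry growth rate
    g_ind = emp_ind_nat_end / emp_ind_nat_start - 1  # the industry's national growth rate
    g_loc = emp_local_end / emp_local_start - 1      # the local industry's growth rate

    national = emp_local_start * g_nat               # national growth effect
    industry_mix = emp_local_start * (g_ind - g_nat) # industry-mix effect
    local = emp_local_start * (g_loc - g_ind)        # local (competitive) effect
    return national, industry_mix, local

# Illustrative numbers only: local services jobs grew 100,000 -> 135,000
# while the industry grew 20 percent nationally and all employment 10 percent.
nat, mix, loc = shift_share(100_000, 135_000,
                            1_000_000, 1_200_000,
                            10_000_000, 11_000_000)
print(round(nat), round(mix), round(loc))  # prints: 10000 10000 15000
print(round(nat + mix + loc))              # prints: 35000 (the actual local change)
```

The local effect is defined as a residual, so the three components always sum exactly to the observed local change; a positive residual signals a local competitive advantage.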
To assess the quality and reliability of the data, we conducted in-person meetings with government officials of the National Institute of Statistics, Geography and Information Technology in Mexico City to discuss the methodology for collecting the data and any known limitations or biases. For instance, statistics on maquiladora employment and production are affected when companies leave the program. Although establishments and employees are no longer considered part of the maquiladora sector and statistics correctly show a decline in maquiladora employment, the firms and employees may still remain in operation outside of the program. We also analyzed the data sources for internal consistency, as well as external consistency with other sources of information, such as our structured interviews. Although both U.S. and Mexican statistics have some limitations, we consider the data sufficiently reliable to present general trends and magnitudes in production, employment, and trade. To identify the factors that have affected employment and production in the maquiladora industry (objective 3), we analyzed economic data and conducted semistructured interviews. Specifically, to determine the link between maquiladora employment and U.S. economic conditions, we assembled data on maquiladora employment in total and by main sectors as well as quarterly U.S. GDP data from January 1980 to December 2002. We then converted all of these data to their natural logarithms and performed a regression of maquiladora employment on U.S. real GDP, U.S. manufacturing shipments, and the real peso-dollar exchange rate. The semistructured interviews were conducted in person and by telephone with 29 representatives of business associations, consisting of organizations representing principal industrial sectors involved in maquiladora operations, and maquiladora associations at the local and national levels. 
Of these 29 organizations, 23 reported their members experienced a decline in employment and/or production. We asked these 23 organizations to discuss the major reasons for the maquiladoras’ recent decline. We relied on business associations to identify the factors affecting employment and production in the maquiladora industry because of their members’ direct experience with plant closures, changes in employment levels, and other company changes. We also relied on associations to comment generally on issues facing the industry, such as increased competition, and for information on overall industry trends. In selecting potential interview participants from maquiladora and other business associations, to ensure representation throughout the industry, we considered three criteria: geographic location, industry sector, and country of origin or region of representation. Of the 29 associations interviewed, 17 were maquiladora associations and 12 were industry-specific associations. The maquiladora associations were identified primarily through the membership list of Mexico’s National Council of the Maquiladora Export Industry (Consejo Nacional de la Industria Maquiladora de Exportación, or CNIME), whose membership includes 22 maquiladora associations located across Mexico. We contacted all 22 members and the national association, and we completed interviews with the national association and 14 local member associations. We completed additional interviews with two maquiladora associations that were not members of CNIME but were included to broaden representation by country of origin or region of representation (i.e., Japan and the United States). Of the 12 industry-specific associations, we sought interviews with associations representing major industrial sectors, specifically targeting the electronics, automotive, and apparel sectors. 
Taken together, the 29 associations represented companies originating in Mexico, the United States, and Japan. We developed 14 questions for the semistructured interview guide, based on previous research. Six questions were closed ended, and eight were open ended. Participants’ responses to the open-ended items were content-analyzed by two trained coders, and intercoder reliability values were computed. Reliability values initially ranged from 58 percent to 100 percent; the coding category scheme was then modified until 100 percent agreement was reached between the two coders. The results are not generalizable beyond our sample; however, we believe we included associations in a way that is as balanced and inclusive as possible within the number of interviews we were able to conduct. To identify the implications of recent developments in the maquiladora industry for the border region and U.S.-Mexico trade (objective 4), we analyzed documents and interviews citing factors that might influence the recovery of maquiladora production. We also analyzed the debate about the viability of the industry and some initiatives to identify and address its recovery. The information on foreign laws in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. We performed our work from July 2002 through July 2003 in accordance with generally accepted government auditing standards. In addition to those listed above, Joel Aldape, Bronwyn Bruton, Gezahegne Bekele, Francisco Enriquez, Reid Lowe, Alison Martin, and Timothy Wedding made key contributions to this report. 
Mexico's maquiladoras have evolved into the largest component of U.S.-Mexico trade. Maquiladoras import raw materials and components for processing or assembly by Mexican labor and reexport the resulting products, primarily to the United States. Most maquiladoras are U.S. owned, and maquiladoras import most of their components from U.S. suppliers. Maquiladoras have also been an engine of growth for the U.S.-Mexico border. However, the recent decline of maquiladora operations has raised concerns about the impact on U.S. suppliers and on the economy of border communities. Because of these concerns, GAO was asked to analyze (1) changes in maquiladora employment and production, (2) factors related to the maquiladoras' decline, and (3) implications of recent developments for maquiladoras' viability. After growing rapidly during the 1990s, Mexican maquiladoras experienced a sharp decline after October 2000. By early 2002, employment in the maquiladora sector had contracted by 21 percent and production had contracted by about 30 percent. The decline was particularly severe for certain industries, such as electronics, and certain Mexican cities, such as Tijuana. The downturn was felt on the U.S. side of the border as well, as U.S. exports through U.S.-Mexico land border ports fell and U.S. employment in manufacturing and certain other trade related sectors declined. The cyclical downturn in the U.S. economy has been a principal factor in the decrease in maquiladora production and employment since 2000. Other factors include increased global competition, particularly from China, Central America, and the Caribbean; appreciation of the peso; changes in Mexico's tax regime for maquiladoras; and the loss of certain tariff benefits as a result of the North American Free Trade Agreement. Maquiladoras face a challenging business environment, and recent difficulties have raised questions about their future viability. 
Maquiladoras involved in modern, complex manufacturing appear poised to meet the industry's challenges. Still, experts agree that additional fundamental reforms by Mexico are necessary to restore maquiladoras' competitiveness. U.S. trade and homeland security policies present further challenges for maquiladoras.
To help ensure the safety of the more than one million people who travel on thousands of flights throughout the United States each day, FAA inspects and certifies the aviation community’s compliance with FAA regulations. To better focus its limited inspection resources, FAA needs quick access to meaningful information about airlines, aircraft, pilots, and more. However, FAA currently does not have this capability. To address this limitation, FAA is acquiring the Safety Performance Analysis System (SPAS), an automated decision support system to aid FAA in targeting its inspection and certification resources on those areas that pose the greatest aviation safety risks. The Federal Aviation Act of 1958, as amended, requires FAA to promote the highest degree of aviation safety and establishes the safety of air passengers as a joint responsibility of airlines, aircraft manufacturers, and FAA. The airlines are responsible for operating their aircraft safely, aircraft manufacturers are responsible for designing and building aircraft that meet FAA regulations, and FAA is responsible for, among other things, (1) certifying that an airline is ready to operate and (2) conducting periodic inspections to ensure continued compliance with safety regulations. FAA is also responsible for certifying that aircraft produced in the United States or imported by domestic companies and individuals meet minimum safety standards before the aircraft can be operated. To carry out its inspection responsibilities, FAA employs 2,300 inspectors located in 91 Flight Standards District Offices (FSDO), International Field Offices, and Certification Management Offices throughout the United States. These inspectors oversee more than 17,900 commercial aircraft, 4,800 repair stations, 401,060 aircraft mechanics, 642 pilot training schools, 193 maintenance schools, 665,000 active pilots, and 184,400 active general aviation aircraft. 
FAA inspectors perform four principal functions: (1) airline operation certification, (2) routine surveillance (a process of periodic inspections of airlines and aviation-related activities), (3) accident and incident investigations, and (4) safety promotion. FAA divides its surveillance or inspection activities into three categories—operations, maintenance, and avionics. Operations inspections focus on such items as pilot performance, flight crew training, and in-flight record keeping. Maintenance inspections examine the airline’s overall maintenance program, including personnel training and established policies and procedures. Avionics inspections focus on the condition of electronic components of the aircraft. To carry out its aircraft certification responsibilities, FAA has about 825 engineers and others to oversee the certification of new aircraft and the continued airworthiness of the existing fleet. To assist its engineers, FAA also delegates certification activities, as necessary, to designated, FAA-approved employees of manufacturers. The FAA engineers, in turn, oversee the activities of these designees. The size of FAA’s inspection and certification workforce, while allowing it to perform its “must do” work, has prevented it from completing other important aviation oversight activities that it designates as “should do.” To assist FAA in maximizing the efficiency and effectiveness of its limited workforce, we have long encouraged it to better focus its inspection activities on those entities and areas that pose the greatest risk to aviation safety. In 1987, we recommended that FAA, in addition to having minimum standards for the type and frequency of airline inspections, target airlines displaying risk precursors (that is, characteristics that may indicate safety deficiencies). 
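Targeting by risk precursors amounts to ranking entities on a set of indicators and spending scarce inspector hours on the highest scorers first. The sketch below is purely illustrative; the precursor names and weights are invented and do not reflect FAA's or SPAS's actual indicators.

```python
# Hypothetical sketch of risk-precursor targeting: rank operators by a
# weighted count of precursor flags so limited inspection resources go
# to the highest-risk entities first. Flags and weights are invented
# for illustration only.

PRECURSOR_WEIGHTS = {
    "rapid_fleet_growth": 2.0,
    "recent_enforcement_action": 3.0,
    "aging_fleet": 1.5,
    "high_inspector_findings_rate": 2.5,
}

def risk_score(operator):
    """Sum the weights of every precursor flag set on the operator."""
    return sum(w for flag, w in PRECURSOR_WEIGHTS.items() if operator.get(flag))

operators = [
    {"name": "Carrier A", "rapid_fleet_growth": True, "aging_fleet": True},
    {"name": "Carrier B", "recent_enforcement_action": True,
     "high_inspector_findings_rate": True},
    {"name": "Carrier C"},
]

# Inspect highest-scoring operators first; those with no precursors get
# only the minimum required surveillance.
for op in sorted(operators, key=risk_score, reverse=True):
    print(f"{op['name']}: risk score {risk_score(op)}")
```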
Again in 1988, we reported that by monitoring risk precursors, FAA could target for intensive inspection those airlines most likely to experience safety compliance problems, thereby improving the quality of information available on the airlines’ compliance with regulations. Similarly, we reported in 1991 that FAA needed a mechanism to make more effective use of its limited resources. We further reported that a system that systematically and uniformly determined risks could provide FAA with information vital to enhancing its inspection program. Finally, we recommended, in 1993, that FAA develop criteria for targeting inspections on high-risk conditions. FAA’s response to our findings on its inspection program was to develop an automated decision support system for FAA managers, safety inspectors, and certification engineers in headquarters and field offices. This system, begun in February 1991 and designated as SPAS, is planned to be a user-friendly tool for (1) quick analysis of safety-related data, (2) generation of standard and ad hoc indicators (that is, precursors) of safety performance, and (3) identification of safety-related risk areas for investigation, either through analysis of the underlying data used to generate the risk precursor or through on-site inspection of the risk item. FAA decided in early 1991 to develop an aviation safety performance analysis system to aid it in managing its inspection program. In May 1993, FAA completed development and installation of the initial SPAS prototype at 12 field offices, FAA headquarters, and the Air Force’s Air Mobility Command. By the end of 1995, FAA plans to have developed the first operational SPAS release, which is to offer additional functions and performance capabilities above those of the prototype, such as the ability to look at the source data behind the indicators. This first operational release is to be installed at up to 30 locations. 
Development of the final operational SPAS release is scheduled to be completed in late 1997. This version is to be deployed to as many as 140 locations. To date, FAA has spent $6.3 million on the initial and enhanced prototypes. FAA estimates that SPAS will cost a total of $32 million to develop and install. SPAS is to have a powerful graphical user interface that displays performance indicators in such a way that users can easily spot areas for further inquiry. FAA plans four categories of indicators or risk precursors: (1) air operator, (2) air agency (for example, flight and mechanic schools, aircraft repair stations, and so forth), (3) aircraft, and (4) air personnel. In developing the indicators, FAA is focusing first on air operators and air agencies. To date, 25 indicators have been developed and are being generated by the SPAS prototype—19 for air operators and 6 for air agencies. FAA has established a SPAS program office, within the Office of Flight Standards, to manage SPAS. The program office is supported by the FAA Technical Center in Atlantic City, New Jersey. The Technical Center, in turn, has contracted with the Department of Transportation’s Volpe National Transportation System Center (VNTSC) for technical and analytical support, such as developing and evaluating the SPAS prototypes and defining the safety indicators. VNTSC has contracted with UNISYS Corporation to provide SPAS hardware and develop applications software in accordance with defined user requirements. Overall SPAS program guidance and direction is provided by the SPAS Steering Committee, which is chaired by the SPAS program manager and includes representatives from four FAA regions, the FAA Office of Integrated Safety Analysis, and the Department of Defense. The Steering Committee’s responsibilities include defining systems requirements, approving SPAS indicators, and monitoring system development and implementation. 
The SPAS program office is also supported by air operator, air agency, aircraft, and work program planning expert panels. These panels are responsible for defining and proposing indicators, identifying data sources for generating these indicators, and reengineering the inspection functions in light of SPAS capabilities. We reviewed SPAS because of our long-standing interest in helping FAA to improve its inspection and certification programs. Our objectives were to determine (1) whether FAA is effectively managing the SPAS acquisition, including its communication network, and (2) the extent to which SPAS will rely on Aviation Safety Analysis System (ASAS) databases and whether FAA is effectively addressing known data quality problems with these databases. To accomplish our first objective, we interviewed SPAS program management about guidance governing the acquisition, and we reviewed this guidance to ensure that it provided a reasonably structured and disciplined basis for acquiring SPAS. Our review included analyzing the guidance relative to Office of Management and Budget (OMB) Circular A-109 and our 1994 report addressing how leading organizations manage information technology investments. We then interviewed program and contractor officials and reviewed system development documentation and plans to determine whether actual SPAS development processes and practices were consistent with the guidance and whether these processes and practices were exposing the program to unnecessary risks. In particular, we focused on system requirements analysis and definition, verification and validation, cost estimating, system architecture alternatives analysis, and communications planning. We also interviewed SPAS users at four field locations that are currently operating the SPAS prototype to determine their involvement in defining SPAS requirements and their reaction to and satisfaction with the prototype. 
These four sites were the Flight Standards Division of the Western Pacific Region; the Van Nuys, California, FSDO; the San Jose, California, FSDO; and the Bedford, Massachusetts, FSDO. We also witnessed the operation of the prototype at these locations, and operated the prototype at the contractor’s facility in Cambridge, Massachusetts. In addition, we reviewed available SPAS program management and system development documentation, such as the SPAS Functional Description Document, SPAS working group minutes, the SPAS verification and validation contract proposal, SPAS cost estimate and budget requests, and SPAS alternative architectures analysis. In addition, to ascertain acquisition plans and whether these plans would satisfy SPAS needs, we interviewed program and contractor officials, as well as FAA officials responsible for acquiring FAA-wide and Office of Flight Standards communication networks. In doing so, we discussed SPAS communications requirements and steps underway to satisfy them. To accomplish our second objective, we interviewed SPAS program officials and Office of Flight Standards information resource management officials and reviewed SPAS documentation to determine what FAA and non-FAA databases will be used to generate SPAS indicators. We then discussed with these officials the accuracy, completeness, and consistency of the data residing on these databases and what plans and initiatives are underway to address any data quality shortfalls and what assurances they had that any quality problems would be addressed in time for SPAS deployment. We also reviewed published GAO and FAA reports and studies on the quality of the data in these databases, and interviewed Office of Flight Standards officials as to the status of actions to address any recommendations made. We conducted our audit work primarily at FAA headquarters in Washington, D.C.; and VNTSC and UNISYS Corporation in Cambridge, Massachusetts. 
We also communicated frequently with the FAA Technical Center in Pomona, New Jersey. Throughout our review, we discussed our preliminary results with the Director of the Office of Flight Standards. In addition, the Department of Transportation and FAA provided oral comments on a draft of this report. Their comments and our evaluation of these comments are contained in chapters 2 and 3 of this report. Additional comments provided on the contents of the draft report have been incorporated as appropriate throughout the report. We conducted our work between November 1993 and November 1994, in accordance with generally accepted government auditing standards. Overall, FAA has handled key aspects of SPAS development reasonably well. In particular, its analysis and definition of SPAS requirements provided for user involvement and effectively used prototyping techniques. Moreover, recent changes to the FAA standards governing the acquisition of SPAS provide important structure and discipline that, if adhered to, should reduce SPAS development and deployment risks. Also, FAA’s decision to employ an independent verification and validation agent should further mitigate system development and acquisition risks. Last, FAA’s decision to not acquire duplicative data communication networks to support SPAS and other systems should save precious resources. However, opportunities exist to improve the SPAS cost estimates, and thus FAA management’s ability to make sound system investment decisions. One of the most difficult and challenging aspects of any systems development effort is accurately and completely identifying and documenting requirements of system users. To do so successfully requires a commitment on the part of management and system developers to involve users continuously throughout the system development process. That is, the agency must recognize that user requirements cannot be accurately defined at the beginning of the development process. 
Instead, effective requirements definition demands a more iterative process in which requirements are continuously analyzed, validated, and refined through constant interaction with users. FAA’s approach to analyzing and defining SPAS requirements has involved a series of steps to maximize user involvement and provide users with early “looks” at the system for evaluation and reaction, thereby better ensuring that the system will meet their needs. These steps first began in May 1991 when the SPAS Steering Committee distributed a questionnaire to 1,000 aviation safety inspectors to solicit their views on what type of automated tool would best serve their needs. The questionnaire contained a variety of questions, such as: How could an automated system help with your work? and What features would you like to see? On the basis of the 375 survey responses received and experience with the Air Force’s airline safety analysis system, FAA and VNTSC generated an initial set of SPAS requirements. Next, FAA began validating and refining the requirements. First, FAA, in collaboration with VNTSC, held a series of group discussions and one-on-one visits with aircraft safety inspectors throughout the country. According to SPAS documentation, these discussions and visits allowed SPAS developers to see first-hand what the inspectors do on a daily basis and to listen to their ideas, thus giving the developers a keener understanding of the inspectors’ needs and helping them to refine SPAS requirements accordingly. In October 1991 and December 1993, two expert panels consisting of dozens of experienced aviation safety inspectors and members of the SPAS management team were established to determine whether users’ needs were adequately being captured. These panels were charged with developing and recommending SPAS safety indicators and revalidating SPAS functional requirements. 
To further validate SPAS requirements, FAA’s next step was to implement the SPAS Steering Committee’s recommendation to build a prototype system for users. System prototyping is an effective method of defining and refining user requirements. By quickly providing users with a system model (that is, something less than the full complement of envisioned system features and functions) with which to interact and react, prototyping allows needed adjustments to be made before making large investments in developing the final system. In our 1994 report on how leading organizations improved mission performance through strategic information management and technology, we noted that these organizations make effective use of rapid prototyping to minimize system risks and maximize benefits. The SPAS prototype evaluation focused on the effectiveness and ease of use of the user interface, the adequacy of source systems’ data quality, and the impact of SPAS on inspectors’ daily activities. In late 1993, at the conclusion of the SPAS prototyping phase, FAA was scheduled to discontinue prototype support. However, FAA elected to continue the prototype to help in ongoing requirements refinement, early user familiarization with SPAS, and testing of new SPAS concepts. In March 1993, FAA issued Order 1810.1F, which established its policy for initiating and managing acquisition programs like SPAS. Prior to 1810.1F, the SPAS program office was following VNTSC’s Information Systems Development guidelines for development and acquisition programs. The program office has elected to supplement the VNTSC guidelines with Order 1810.1F. 
Our 1994 report on how leading organizations improve mission performance through strategic information management and technology emphasized the importance of using a disciplined process to develop and acquire information systems—one that uses explicit decision criteria, assesses benefits and costs, and involves senior program and information managers in key system decisions. We reviewed FAA Order 1810.1F and believe it is a reasonably disciplined and organized system acquisition and development process, which, if followed, could benefit programs by reducing the potential for cost growth, schedule delays, and performance deficiencies. In particular, 1810.1F imposed valuable rigor and structure on the SPAS acquisition process by applying the principles embodied in OMB Circular A-109 on major system acquisitions: establishing clear lines of responsibility, authority, and accountability; requiring user and sponsoring office participation throughout the acquisition process, including at key decision points; directing that mission needs be established at the beginning of the acquisition process and then revalidated at critical decision points throughout the remainder of the process; mandating that alternative technological approaches be analyzed prior to selecting a final development strategy; and tailoring the acquisition requirements to the size, complexity, and nature of each specific program. FAA Order 1810.1F specifies five phases and four related key decision points. Each phase produces the documentation needed to make decisions at the next decision point. SPAS is currently in phase I of the 1810.1F acquisition process, with the second decision point scheduled for September 1996. Documents required for this decision point, such as the program master plan and a cost-benefit analysis, are currently being prepared and thus were unavailable for review. 
Verification and validation involves analyzing and testing a system throughout its development to ensure that it meets specified requirements. The purpose of verification and validation is to better ensure the final system’s performance, integrity, reliability, and quality. Verification and validation activities are advocated by industry software standards and federal guidelines for software management, especially for systems that involve the safety and preservation of human life. When verification and validation activities are performed by an organization separate and distinct from the system developers, the additional benefit of independence accrues. This is referred to as independent verification and validation (IV&V). During the course of our review, FAA began using an IV&V agent as a risk mitigating technique. In June 1994, it contracted with Sandia National Laboratories to examine issues relating to (1) system architecture, such as scalability, vulnerability, and robustness, (2) network and server capacity, and (3) system operation. Sandia has also subcontracted with the University of Nevada at Las Vegas for evaluation of the indicators’ appropriateness. FAA has long recognized that its communications infrastructure could not satisfy the functional (for example, video conferencing) and performance (for example, response time) requirements of SPAS and other applications. To address this shortfall, FAA is acquiring a corporate wide area network (WAN), called the Administrative Data Transfer Network (ADTN) 2000. This network is intended to satisfy not only current FAA requirements for non air traffic control and administrative communications, but also to accommodate growth in communication demands through capacity expansion and technology infusion. On September 19, 1994, FAA’s Telecommunications Division awarded a 5-year contract for ADTN 2000 services. Current plans call for the network to be operational by Spring of 1995. 
In addition to the agency WAN, FAA’s Office of Regulation and Certification, which includes the SPAS user community, was planning its own, independent WAN, called the Aviation Information Exchange Network (AIX). According to officials in this office, AIX was pursued because ADTN 2000 was already 18 months late and they believed that further delays would occur. In June 1994, we raised questions about duplicative WANs and the lack of coordination between these two acquiring organizations. As a result of our inquiries, and to FAA’s credit, the two organizations began meeting. Consequently, the Office of Regulation and Certification later agreed to first evaluate whether ADTN 2000 could meet its communication needs before deciding if it would acquire its own, separate network. Our 1994 review of how leading public and private organizations use information technology to improve mission performance showed that these organizations rely heavily on performance measures to, among other things, make informed system life-cycle choices, allocate resources, track progress, and learn from mistakes. One area that these organizations’ standard measurement practices focused on was resource consumption, which requires that reliable estimates of resource needs be developed and used. The estimated cost of a system’s software is one of the more critical of these resource estimates. To develop reliable software cost estimates, industry practice is to employ one or more structured cost estimating techniques or methods, augmented by the judgment of software experts. While the estimates derived using these methods are not precise, they are more credible than relying solely on the subjective opinions of experts that are unsupported by any objective, verifiable analysis. FAA’s current cost estimate for developing and installing SPAS is $32 million. 
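A structured estimating technique of the kind industry practice calls for can be as simple as basic COCOMO (Boehm, 1981), which turns an estimated program size into effort and schedule figures that others can verify and later update. The size input below is hypothetical, not an actual SPAS measurement.

```python
# Basic COCOMO (Boehm, 1981): effort in person-months as a function of
# estimated size in KLOC (thousands of delivered source lines).
# Illustrates a structured, verifiable estimate; a real estimate would
# also apply cost-driver adjustments (Intermediate COCOMO) and expert
# review.

MODES = {
    # mode: (a, b, c, d) for effort = a * KLOC**b, schedule = c * effort**d
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}

def cocomo(kloc, mode="semidetached"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b      # person-months
    schedule = c * effort ** d  # calendar months
    return effort, schedule

# Hypothetical size for a mid-sized decision support system
effort, months = cocomo(100, "semidetached")
print(f"effort ~ {effort:.0f} person-months over ~ {months:.0f} months")
```

Because the formula and its inputs are explicit, a reviewer can challenge the size estimate or the mode selection directly, which is exactly the objective, verifiable analysis that purely judgmental estimates lack.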
According to SPAS program officials, the software component of this estimate was derived 3 years ago based on the subjective judgment of the technical program manager and two contractor officials. No systematic software cost estimating tool or technique, such as COCOMO (Constructive Cost Model), REVIC (Revised Intermediate COCOMO), or SLIM (Software Life Cycle Intermediate Model), was used. Further, the program has no documented analysis to support the software cost estimate, and it has not attempted to update it. Program officials told us that they relied on the judgment of contractor and program experts in estimating SPAS software costs because they have been unable to identify a reliable cost estimating tool or model appropriate for systems like SPAS, which employs a client-server based architecture. Because of the manner in which the SPAS cost estimate was derived, the reliability of the estimate is uncertain; thus, any decisions made regarding SPAS that rely on this estimate may prove to be ill-advised. For example, to comply with FAA Order 1810.1F, the program office asked FAA’s Office of Operations Research to conduct a SPAS cost-benefit analysis. The program office plans to provide its SPAS cost estimate to serve as the basis for this analysis. Because the cost estimate is not credible, any cost-benefit analysis that relies on it will also not be credible. FAA’s Office of Operations Research recognizes the limitations in software cost estimates that are not based, at least in part, on formal cost estimating techniques. According to a representative for the contractor performing the SPAS cost-benefit analysis for this office, the SPAS cost estimate provided by the program office will be validated using structured estimating techniques before it is used in the cost-benefit analysis. Key aspects of FAA’s management of the SPAS acquisition are reasonably sound. 
The steps the program office has taken to involve users in defining requirements and evolving the prototypes are appropriate. In addition, the new FAA system acquisition requirements can bring added rigor and discipline to the SPAS development process. Further, the program office’s decision to employ an outside party to verify and validate development activities should prove beneficial. Last, FAA’s steps to avoid unnecessarily duplicative WANs for SPAS and other systems are judicious and may save scarce acquisition and operation and maintenance money. However, we believe that opportunities exist to strengthen the program office’s cost estimating techniques and thus its ability to measure performance and make informed investment decisions. We recommend that the FAA Administrator direct the Associate Administrator for Regulation and Certification to ensure that SPAS software costs are estimated using systematic and rigorous estimating techniques and methods. In commenting on a draft of this report, FAA officials disagreed with our recommendation, although they agreed that the SPAS cost estimate needs to be updated. The officials stated that FAA’s approach to estimating SPAS software costs (that is, relying solely on the judgments of experts) is consistent with agency guidance. However, they added that this guidance does not specifically address software cost estimating. They also stated that the cost estimating models mentioned in our report are more appropriate for mainframe systems rather than client-server based systems, such as SPAS. Instead, the officials said that they have recently identified a software cost estimating tool that they believe is applicable to SPAS and that they are now evaluating and may acquire and use. We are encouraged by these statements. Our intent was to neither advocate a particular tool nor to ignore the value of expert judgment in deriving software cost estimates. 
Rather, our aim was to convince FAA to follow accepted industry cost estimating practices of augmenting expert judgment with the kind of objective, verifiable analysis that structured estimating techniques and methods offer.

To produce its indicators, SPAS will use data from a myriad of existing FAA databases. Because these data have been and continue to be incomplete, inconsistent, and inaccurate, the utility of SPAS is threatened. FAA initiatives underway to improve source data quality are insufficient to ensure that SPAS will receive the data it needs in 1997 to be effective. As currently envisioned, SPAS could eventually rely on over 25 databases within FAA, other government agencies, and the aviation industry. The largest single source of data will be FAA’s Aviation Safety Analysis Subsystem (ASAS). ASAS is an umbrella term used to describe a collection of 34 largely independent FAA databases. Generally, the nature of these databases falls into one of five categories: repository of data on various components of the aviation industry, repository of data on FAA personnel, tools for managing inspector/investigator workload, reference sources for FAA regulations, and an oversight tool for FAA senior management.

The current SPAS prototype relies almost exclusively on two ASAS databases, the Program Tracking and Reporting Subsystem (PTRS) and the Vital Information Subsystem (VIS), in generating its current complement of indicators. PTRS contains data on planned inspections of airlines and aircraft, as well as the results of these inspections. The data are entered by inspectors or support personnel and are used to inform FAA management of inspection activities. VIS contains key data on airlines, pilot and mechanic schools, repair stations, and FAA designees (that is, people and organizations that FAA empowers to act as surrogates for it in discharging specific FAA responsibilities).
These data are entered by inspectors or support personnel and are used to track aviation activity. As the number of indicators expands, FAA plans to use data from other ASAS databases. (See appendix I for a description of each of the potential SPAS source data systems.)

The quality of SPAS’ outputs, and thus its utility in supporting FAA decisionmakers, depends on the quality of its inputs. FAA fully recognizes this. In fact, the Office of Flight Standards Five Year Information Strategy states that information and its quality are at the heart of SPAS’ success. Similarly, an Office of Flight Standards expert panel on data quality stated that SPAS needs a sound foundation from which to analyze, and this foundation must be in the form of reliable databases that are correct, complete, and consistent. Also, a Flight Standards Working Group stated that for advanced tools like SPAS, the data on which they operate must be correct, consistent, complete, and up-to-date, or the results will be meaningless or even misleading.

Despite the criticality of reliable source data to SPAS’ success, the poor quality of the data on the FAA databases that SPAS will use remains a serious problem today. In our 1988 report on the feasibility of assessing safety levels of individual airlines, we concluded that none of the potential source databases could provide a satisfactory basis for developing safety indicators because the data were unreliable, incomplete, and inconsistent. At that time, one major airline described FAA’s data on aircraft accidents, incidents, and serious malfunctions as, for the most part, worthless. In 1989 and 1991, we reported on inaccurate and incomplete inspections data in PTRS. In its response to the 1991 report, FAA agreed that PTRS was inaccurate and incomplete. In fact, an FAA-sponsored study that year concluded that PTRS could not be used for problem diagnosis and trend analysis with any degree of reliability until data quality issues were resolved.
Similarly, a 1992 Flight Standards expert panel, established to identify ways to improve the quality of PTRS data, reported that PTRS did not contain reliable, consistent data. The panel made recommendations for improvement.

While FAA has recognized its data quality problems for years and has taken some steps to address them, the problems still persist. According to 1993 SPAS documentation, many FAA databases continue to have data quality and consistency problems, critical data elements are still missing or contain erroneous data, and supporting documentation is either out-of-date or missing. A 1993 Flight Standards working group on data quality improvement also reported that the data quality problem of FAA safety-related databases remains as much an issue today as it was more than 5 years ago. Also, a 1994 Department of Transportation Inspector General report states that the database containing data on inflight “service difficulties” is neither complete nor current. For example, the report states that omissions in different data fields for each “service difficulty” occurrence in the database as of January 1993 ranged from 46 to 98 percent. In November 1994, SPAS program officials affirmed these reports by stating that the quality of data residing on the SPAS feeder databases remains a major risk item for the system.

Despite FAA’s recognition of both SPAS’ need for quality source data and its lack of such data, FAA has not developed a coordinated strategy for rectifying the situation. We reviewed the 1992 Five Year Flight Standards Information Strategy and found one broad goal in this area—to “ensure quality data for decisionmaking.” We further found that “development of measurement tools to assess and improve data quality of SPAS feeder systems” and “begin data needs analysis of existing processes” were the extent of planned actions to accomplish the goal.
We did not find a comprehensive strategy that (1) clearly defines measurable, interim, and long-term goals for improving data quality, (2) specifies the full extent of the problem being addressed, (3) is supported by a series of specific steps designed to meet the stated goals according to a specified schedule, and (4) designates the organizations responsible for executing the strategy and provides the associated authority and resources for doing so. Officials with the SPAS program office, the Flight Standards Training and Automation Committee, and the Office of Flight Standards’ Information Resources Management function acknowledged that no strategy exists. Further, officials with the former two organizations stated that FAA has not yet determined what level of data quality is needed from each of the source databases. In other words, FAA has not agreed on a definition of its long-term data quality goals. Instead, these officials pointed to a few independent data improvement measures, some of which are being performed by the SPAS program office even though it is not responsible for these source databases. These measures include implementation of select recommendations made by an Office of Flight Standards working group and the Volpe National Transportation Systems Center (VNTSC) on PTRS and VIS data quality improvement, development of an automated tool for measuring the quality of data residing on PTRS and VIS (the tool may eventually be applied to all SPAS source databases), and revision of the PTRS and VIS users manuals.

While we do not question the merits of these initiatives, they neither individually nor collectively represent the type of coordinated and comprehensive effort that can ensure that SPAS will receive the data it needs when the system is deployed in 1997. In commenting on a draft of this report, the Director of the Office of Flight Standards agreed that such a strategy is needed.

The axiom “garbage in, garbage out” applies to SPAS.
This system will not be effective if the quality of its source data is not improved. Moreover, it could potentially misdirect FAA resources away from the higher risk aviation activities. While FAA has some initiatives underway to improve some of these data, the initiatives are isolated, incomplete, and provide little assurance that SPAS will receive the quality data it needs to be useful. Unless FAA acts quickly on this matter, SPAS will not be able to perform as intended when it is deployed in 1997. FAA must expeditiously develop a comprehensive and coordinated strategy for defining and attaining data quality improvement goals within specified time frames for all SPAS source databases.

We recommend that the FAA Administrator direct the Associate Administrator for Regulation and Certification to require the Office of Flight Standards to develop and implement a comprehensive and coordinated strategy, specifying how the quality of all data residing on SPAS source data systems will be brought up to the minimum level needed for SPAS to meet operational requirements. At a minimum, this strategy must include (1) clear and measurable data quality objectives for each SPAS source data system that recognize the sensitivity of SPAS’ various analyses to the respective source data inputs, (2) accurate assessments of the current quality of the data on each SPAS source data system, (3) clear statements of organizational responsibility and authority for improving the source systems’ data quality, (4) both interim and long-term milestones for attaining stated quality objectives that tie closely to SPAS development schedules, and (5) estimates of resource requirements to meet stated objectives and agency commitments to providing these resources.

In commenting on a draft of this report, FAA officials agreed with our recommendation to develop and implement a comprehensive and coordinated strategy for improving the quality of the data residing on SPAS source databases.
However, they did not believe that the data quality problem is as severe as our report describes it to be. They stated that the quality of the data has measurably improved over the last several years. To support their position, they cited various steps taken to strengthen the completeness, correctness, currency, and consistency of the databases. For example, they said that data entry edit checks have been introduced and that inspectors are now more conscious of the consistency of the data they enter. However, they could not provide any data, analysis, or other verifiable evidence supporting their claims. While we acknowledge that the steps cited should produce some quality gains, without evidence of actual improvements we believe that the quality of the data on which SPAS will rely remains a significant problem. As stated in our report, FAA and Department of Transportation analyses as recent as 1993 and 1994 continue to report severe aviation safety-related data limitations.
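The automated data-quality measurement discussed above reduces, at its core, to field-completeness checks of the kind behind the Inspector General's 46-to-98-percent omission figures. A minimal sketch follows; the records and field names are hypothetical, not drawn from PTRS or VIS.

```python
# Illustrative sketch of a field-completeness measurement, the simplest form
# of the automated data-quality tool the report describes. The inspection
# records and field names below are hypothetical examples.

def field_completeness(records, fields):
    """Return the percentage of records with a non-empty value per field."""
    total = len(records)
    report = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = 100.0 * filled / total if total else 0.0
    return report

# Hypothetical inspection records with missing entries.
records = [
    {"inspection_id": "001", "result": "pass", "inspector": "A12"},
    {"inspection_id": "002", "result": "", "inspector": "B07"},
    {"inspection_id": "003", "result": "fail", "inspector": None},
]
print(field_completeness(records, ["inspection_id", "result", "inspector"]))
```

Measured against an explicit objective (for example, at least 95 percent of each critical field populated), output like this would supply the verifiable evidence of improvement that the report found lacking.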
Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) Safety Performance Analysis System (SPAS), focusing on: (1) whether FAA is effectively managing the SPAS acquisition; (2) the extent to which SPAS will rely on Aviation Safety Analysis Subsystem (ASAS) databases; and (3) whether FAA is effectively addressing known data quality problems with the ASAS databases. GAO found that: (1) FAA has generally implemented good development and acquisition procedures for SPAS; (2) FAA has maximized user involvement and system prototyping in developing and evaluating SPAS; (3) FAA has reduced SPAS development risks by using an independent verification and validation agent; (4) FAA is exploring the potential of its proposed corporate-wide area network to accommodate SPAS in order to avoid the acquisition of duplicate communication networks; (5) FAA cost estimates for SPAS software may not be reliable, since they are subjective; (6) FAA lacks a strategy for improving SPAS data sources, particularly ASAS, which jeopardizes the system's utility; (7) ASAS databases contain incomplete, inaccurate, and inconsistent data on airline inspections; (8) FAA has not yet defined its long-term data quality goals; and (9) if FAA fails to improve ASAS data, it could improperly target its limited inspection and certification resources on less important problems.
Armed forces must be trained and ready in peacetime to deter wars, to fight and control wars that do start, and to terminate wars on terms favorable to U.S. and allied interests. Historical experience indicates that there is a correlation between realistic training and success in combat. Hence, training should be as realistic as possible to prepare troops for combat. Service training guidance emphasizes the importance of live fire training to create a realistic combat scenario and to prepare individuals and units for operating their weapons systems.

U.S. forces are required to train for a variety of missions and skills. This training includes basic qualification skills such as gunnery and higher-level unit operational combat skills. Service training requirements typically require the use of air ranges for air-to-air and air-to-ground combat, drop zones, and electronic warfare; live-fire ranges for artillery, armor, small arms, and munitions training; ground maneuver ranges to conduct realistic force-on-force training at various unit echelons; and sea ranges to conduct ship maneuvers for training.

To achieve required training, non-CONUS forces use a variety of training areas and ranges that are generally owned by host governments. Ideally, forces conduct the majority of their required training at home station using local training areas or operating areas. However, non-CONUS forces have historically relied on a combination of instrumented training ranges away from home station, major training centers, CONUS training exercises, and multilateral training exercises with countries within their theater to obtain their required training. This includes the Navy and the Marine Corps, which have no permanently stationed combat forces in Europe and no fixed access to training ranges in the European theater.
We have previously reported that the size of home station training areas available to units varies greatly, particularly between units stationed overseas and those in the United States. For example, we reported that local training areas for units stationed in Germany have historically varied in size from 3 acres to 8,000 acres, with divisional units not always housed at the same location. In the United States, we reported that individual installations vary, but far more land is available; typical installations range in size from just under 100,000 acres to more than one million acres.

While this report’s focus is exclusively on training constraints outside CONUS, both we and the Department of Defense (DOD) are examining constraints on CONUS training. At the request of the House Committee on Government Reform, we are reviewing the effects of environmental and commercial development restrictions on key training areas within the 48 contiguous states and whether DOD is effectively working to address these issues. In addition, DOD is in the process of determining the extent of the training problems at CONUS facilities. DOD’s Senior Readiness Oversight Council initiated a sustainable range initiative spearheaded by the Defense Test and Training Steering Group. The initiative’s purpose is to develop and recommend a comprehensive plan of action to ensure that the department maintains range and airspace capabilities that support DOD’s future training needs. In November 2000, the steering group submitted a sustainable range report to the Oversight Council, followed by the publication of nine action plans that addressed eight training-related issues confronting CONUS training and an outreach plan.

Currently, DOD’s efforts have focused almost exclusively on CONUS training. There is no consolidated DOD-wide listing of non-CONUS training ranges and their associated limitations.
Some services have started collecting this information, but a complete inventory is not yet available.

Unlike CONUS-based forces, which conduct their company-level and below training at home station, none of the permanently stationed non-CONUS combat units are able to meet all their company-level and below training requirements at home station. According to service doctrine, home-station training should support company-level and below training requirements. Non-CONUS combat units have the most difficulty meeting their training requirements for (1) maneuver operations, (2) live ordnance practice, and (3) night and low altitude flying. These difficulties arise because both the European and Pacific units’ home-station training locations are not large enough to conduct specific ground maneuvers on a regular basis; are limited in the types of munitions or use of live fire or both; and are restricted in terms of flight hours, altitudes, and electronic frequencies allowed. Some restrictions are long-standing, while others are more recent. In many cases, the increase in restrictions facing U.S. forces is the result of the growing commercial and residential development on or near previously established training areas and ranges. The construction itself, including residential and agricultural development within training ranges, has forced some ranges to close, reduced the training capability at others, and often delayed training on those that remain. Continued growth and host nation concerns may result in further restrictions in the future.

In many instances non-CONUS-based units’ home-station local training areas are not large enough or are inappropriate for certain operations. To make training as realistic as possible, many exercises require specific terrain or large maneuver areas. However, in both Europe and the Pacific U.S. ground forces lack enough space and/or the appropriate terrain to train at their home stations. Following are several examples of such limitations.
The Army in Germany has historically had limited local training areas available for units to engage in home-station training. The Army recognizes only 7 of the 61 identified local training areas as having all the characteristics of a local training area. Over the past decade, as part of the Army’s practice of being a good neighbor, there has been a shift toward using designated areas as opposed to large open areas on private land, which has further lessened the amount of land available for training. Although the Army has limited local training areas, it has been able to conduct all its required training using a combination of training areas within Germany. Figure 7 in appendix II is a map showing the locations of major units and training facilities in Germany.

Army units in Italy also have a limited number of local training areas to conduct home-station training, and for some types of mission training the terrain there is inappropriate for the desired training. Army officials based in Italy said that there were only a few instances where training was constrained at some local training areas. One local training area does not allow the soldiers to train on their High Mobility Multipurpose Wheeled Vehicles. A second local training area is coming under pressure from increased recreational use by the local population. Specifically, during summer 2001, a portion of this training area was completely closed because the area abutting it is becoming increasingly popular for hikers. Army officials expressed concern that they may lose more of the training area in the future. Regarding having the right terrain, while Army units in Italy are expected to operate in wooded areas, soldiers told us that during some exercises they pretended to be moving through a wooded area hiding behind trees when in actuality they were moving through an open field at their local training areas. Figure 8 in appendix II is a map showing the locations of major units and training facilities in Italy.
In Korea, the Army’s local maneuver areas are inadequate in size to support platoon and company training events, which has been a long-standing problem. While the local training areas have always been inadequate to support training events to Army standards, the areas available for training are shrinking as the population in or around the training areas increases. Figure 10 in appendix II is a map showing the locations of major units and training facilities in Korea.

In Japan, local training areas on Okinawa are too small to support the Marine Corps’ maneuver-training requirements. Only small-unit elements can maneuver together. Large force elements that would normally be in close proximity to each other and maneuver together must break into small groups, disperse among the island’s training areas, and maneuver independently. Further, maneuver training that ideally would be conducted in a continuous, uninterrupted manner must be started and stopped as units move from one non-contiguous training area to another. Training constraints have further increased as a result of the 1996 Special Action Committee on Okinawa agreement, which returned the Yomitan Auxiliary Airfield (the site previously used to conduct parachute drop training) to Japan and terminated nearly all artillery training on the island. Most battalion exercises and parachute drops, which require troops to conduct maneuver exercises after being dropped, have been relocated off Okinawa. Marine Corps officials told us that it is becoming increasingly difficult to obtain maneuver training on Okinawa. Figure 11 in appendix II is a map showing the locations of major units and training facilities in Japan.

Many local training areas in both Europe and the Pacific prohibit the use of live munitions or specific weapon systems. DOD officials have repeatedly expressed the need for live fire to make training realistic preparation for combat.
Many live-fire restrictions were implemented because development and population growth near the training ranges reduced the areas available for safety zones and led to noise complaints from nearby residents. Following are examples of such restrictions.

In Germany, Army unit-level personnel have for decades had difficulty conducting live-fire training at home station, except for small arms, because of the prohibitions on live fire in those areas. Army units have historically gone to the Grafenwoehr Training Area (the Army in Europe’s principal live-fire training area) to conduct live-fire training on their major weapons such as tanks and artillery. Regarding Grafenwoehr’s sufficiency for future advanced munitions, Army officials told us that they plan to upgrade the training area to accommodate all munitions that will be used by Army in Europe units.

Both the Army and the Air Force in Italy have restrictions on live-fire training. There are such restrictions at nine of the Army’s ten local training areas and firing ranges. The Air Force’s fighter wing in Italy does not have a local air-to-ground range for bombing training although bombing is one of its primary missions. The lack of an air-to-ground range is a long-standing problem and prevents the wing from conducting surface attack training in Italy.

The two F-15E squadrons in the United Kingdom cannot employ laser-guided bombs on any of their local ranges. Laser-guided bombs are the primary munitions used for air-to-ground attacks by these squadrons. Although these squadrons have regular access to air-to-ground ranges for non-laser-guided bombing, the ranges are not considered quality tactical training ranges that allow pilots to train for identifying and engaging targets. Figure 9 in appendix II is a map showing the locations of the fighter wing and training facilities in the United Kingdom.
When in Europe, Navy units have limited access to training for live fire combined arms, supporting-arms coordination, and naval gunfire support—all of which are capabilities that the Navy tries to maintain at a certain level while deployed in theater. As a result, they rely on bilateral exercises, the use of other countries’ ranges, or North Atlantic Treaty Organization (NATO) exercises to attain training.

Both the Air Force and the Army in Korea face restrictions on live-fire training. The capabilities at Koon-ni, the Air Force’s only exclusive-use range, have steadily diminished over time. Prior to 1978, live bombs were dropped on the target island, practice bombs were dropped on the mainland, and strafing was conducted on a scored land target. In 1978, live bombing was discontinued. Over the years, commercial development has moved within the range’s safety easement zone. By 1989, practice bombing was restricted to the target island, and in 2000 the scored strafing pits were closed. Figure 1 is a photo of a steel mill constructed within the zone during the time that strafing was still allowed. As of April 2002 only training ordnance is allowed and can only be used over water.

For the Army, its live-fire Story Range Complex does not have a safety easement zone sufficient for some of its longer-range weapons, such as the Multiple-Launch Rocket System and the Paladin. In addition, farming and structures—such as houses, a greenhouse, and power lines—lie within the range’s boundaries. Army officials said they frequently find farmers on the range, but they are working with the Korean government to fence this range to keep farmers out of those areas. Figure 2 shows a picture of a local farmer harvesting rice inside the impact area at the Story Range. These farmers have to be removed for obvious safety reasons before the Army can use the range, which causes delays in training.
In Japan, the Navy’s ships and aircraft face live-fire restrictions at their local training facilities. The Navy is unable to conduct two of its surface anti-air warfare exercises due to inadequate target support facilities. These exercises are for the Rolling Airframe Missile against a subsonic cruise missile target and for the standard missile against the Supersonic Sea Skimming Target. The Navy ships typically use Farallon de Medinilla, a range about 1,400 miles from Tokyo, to train aircraft in their air-to-ground deliveries and for surface ship naval gunfire support. Pacific Command officials said that in March 2002, a Federal judge held that the incidental killing of migratory birds at this range violated the Migratory Bird Treaty Act and that a hearing is scheduled for April 30, 2002, to determine if operations at this range will be enjoined. Furthermore, the Pacific Command said that if the Navy loses use of this range, serious degradations in readiness will be expected within six months unless an alternative range is found.

In addition, the Navy’s carrier air wing faces constraints on its ability to conduct live-fire training. Because of the close proximity of the wing’s home base at Atsugi (a suburb of Tokyo) to the local population, live munitions are not allowed to be stored at Atsugi or to be carried by aircraft departing the runway. Consequently, the wing’s aircraft have to take off from Atsugi, land at another base in Japan to load munitions, and then continue on to other ranges to conduct their live-fire training.

In addition to training constraints on mainland Japan, on Okinawa the Marines have limited live-fire range capabilities at their local training areas. For the Marines, the ranges throughout the island have fixed firing points that do not allow tactical firing. Figures 3 and 4 show examples of the fixed firing points at two of the ranges.
As a result, Marines can train to fire in only one direction as opposed to firing in any direction, which would be the most likely situation in combat. While these ranges can help a new Marine become familiar with his weapon, they cannot provide realistic or qualification training. In addition, since the early 1990s, the Marines’ ability to conduct artillery firing on Okinawa has steadily diminished and as previously noted was discontinued altogether in 1996. The government of Japan now pays for the Marines to conduct their artillery firing training on the Japanese mainland four times a year. However, Marine Corps officials told us that one of the artillery ranges used on the mainland, Camp Fuji—a co-use training area in northern Japan—is restricted and that artillery is not being trained as robustly as it was on Okinawa.

Land limitations and environmental concerns restrict live-fire training in Alaska and Hawaii. In Alaska, the artillery, mortar, and Tube-Launched, Optically-Tracked, Wire Command-Link Guided missile (TOW) firing area at Fort Richardson is unavailable to units 6 months a year (during the warmer months). In addition, the local training areas are insufficient to support cavalry gunnery, air-defense artillery-platoon “Stinger” missile ground-to-air gunnery, TOW, and the MK-19, an automatic grenade launcher.

In Hawaii, the Makua live-fire range complex on Oahu was closed from September 1998 to October 2001 because of environmental concerns raised by the local population. Consequently, during this time the Army and Marines were unable to conduct company live-fire exercises at home station. According to Army in the Pacific officials, the Makua range complex is now open for limited use under the terms of a lawsuit settlement. According to these officials, it is unlikely that the Marines will be able to use this range for the next 3 years because the settlement agreement limits the number of annual training events.
Forces in both Europe and the Pacific are not able to conduct all their aviation training events using their local training areas due to a variety of airspace restrictions. For aviators in both theaters, airspace restrictions limit the ability to accomplish required training, thus limiting pilots’ and aircrews’ proficiency in some areas. Although some restrictions are long-standing, Air Force personnel told us that airspace throughout Europe and the Pacific is becoming increasingly congested, adding to the difficulty in completing training. Following are examples of airspace restrictions.

The Air Force units stationed in Germany have limited local airspace available, and altitude restrictions prohibit flying below 1,000 feet. Airspace is routinely available between 1,000 and 10,000 feet, and the Air Force can obtain access to temporary reserved airspace above 10,000 feet, which is allocated to military training flights. The ability to train below 1,000 feet and above 10,000 feet is important because pilots are likely to engage in combat at both low and high altitudes. In addition, flying is limited to the hours between 7 a.m. and 11 p.m. Pilots are also prohibited from flying at supersonic speeds and employing chaff and flares. The tactical ranges in Germany are limited in that only eight aircraft at a time can use them; this does not allow the pilots to train in a realistic formation.

Airspace restrictions in Italy are a major challenge for the Air Force wing located in Aviano. The wing does not have permanent airspace for air-to-air training in Italy. Currently, the wing uses a number of small airspaces over the base and airspace over the Adriatic Sea; however, there is no binding agreement for continued use of this space. Since 1993, the Italian government has limited U.S. military aviation forces in Italy, including both the Air Force and the Army, to 44 sorties per day.
According to wing personnel, it is impossible for the Air Force to meet all annual training events within the 44 sorties per day restriction. In addition to sortie limitations, the Air Force is faced with additional restrictions, such as restricted flying hours, which make it difficult to complete night training requirements; hot pit refueling, that is, refueling while the pilot is in the cockpit and the engine is running; employing chaff and flares; and flying at low altitudes.

For the Air Force, airspace in the United Kingdom is very congested and has restrictions on high altitude and supersonic training, both of which are necessary for pilots to accomplish prescribed air-to-air attacks. Limited night flying hours restrict pilots’ ability to accomplish night vision training events. Pilots have limited radio frequencies in which they can operate their electronic equipment. The only airspace dedicated for unrestricted air-to-air training (including the ability to fly supersonic, employ chaff and flares, and fly at unlimited altitudes) is at an Air-Combat Maneuvering-Instrumentation (ACMI) range over the North Sea operated by a private contractor. See appendix II figure 9 for the location of the North Sea ACMI range. To gain access to this range, the United States must have a contract that allows it to buy training slots; however, this contract lapsed after fiscal year 2001 because of a lack of funding. Electronic warfare training is also a challenge for the Air Force wing. It does not have access to electronic warfare ranges where it can fly against threat emitters and regularly practice reacting to aircraft system alerts. Lastly, the lack of radio frequencies for uses such as communicating while training and transmitting training telemetry to ground stations is an issue in the United Kingdom and throughout Europe.
However, the United Kingdom and Italy have now approved frequencies for the Air Force in Europe’s rangeless training technology, although there is still a lack of radio frequencies for communicating while training. In Korea, the ranges used by the Air Force at Koon-ni and Pilsung have several restrictions. Neither range allows flying after 10 p.m., which makes it extremely difficult for pilots to meet night-flying requirements during the summer months. In addition, the physical locations of the ranges restrict the approaches that aircraft can use to enter the ranges and the angles of attack used to engage targets. In both these locations, the airspace has become increasingly congested over time. The construction of the Inchon commercial airport near Seoul and its expected traffic growth will have a negative impact on airspace availability for training at Koon-ni Range in the future. In Japan, Air Force and Navy aviators are unable to successfully complete training at home station. The size and capabilities of the Ripsaw Range in northern Japan do not support training for the Air Force wing’s mission, which is suppression of enemy air defenses. The range has only two emitters, and the physical size of the range and airspace will not permit additional emitters. Further, frequency bands are extremely restricted in Japan, and additional frequency approval would be very difficult even if the available range space would accommodate more emitters. Consequently, while the size and capability of this range have not changed, the wing’s mission has changed, rendering the range ineffective for current training requirements. For the Navy, prior to 1992, night landing practice was conducted at Atsugi Naval Air Field. However, in 1992, routine landing practice at Atsugi was discontinued because of noise complaints generated by the increased population from residential development that abuts the airfield fence. 
The interim solution has been to have the pilots use Iwo Jima, 674 nautical miles away. The base commander can get approval for night landing practice at Atsugi only if weather prohibits the use of Iwo Jima or if an emergency arises that requires the wing to deploy quickly. Furthermore, because the airspace around Atsugi has become extremely congested, landing patterns cannot be practiced to standard. In addition to constraints on mainland Japan, airspace on Okinawa is restricted, creating difficulties for Air Force and Marine Corps pilots. According to Air Force personnel, there is no electronic warfare training capability on the island; the closest range with electronic emitters is the previously discussed Ripsaw Range in northern Japan. Low-altitude flying (below 1,000 feet) is prohibited over Okinawa, and good neighbor policies limit flying to between 6 a.m. and 10 p.m. Restrictions imposed to accommodate civilian air traffic have dramatically increased, and Marine Corps officials told us that as a result they cannot fly low-altitude air defense missions effectively. Training constraints have a variety of adverse effects. These include (1) requiring workarounds—adjustments to the training events—that sometimes breed bad habits that could affect performance in combat, (2) requiring military personnel to be away from home more often, and (3) in some instances preventing training from being accomplished. Workarounds sometimes lack realism, and the procedures used during a workaround could lead to individuals practicing tactics that are contrary to what would be used in combat. While all units have to deploy to obtain some of their higher-level combined-arms training skills, we found that all non-CONUS units had to deploy to complete training that CONUS units normally perform at home station. While deployments allow the units to complete a great deal more of their training, they result in increased costs and more time away from home. 
Even with these actions, units are not always able to accomplish required training, or they accomplish it only to such a limited extent that it just minimally satisfies the requirement. However, the adverse effects of training constraints are often not captured in readiness reporting. Units employ workarounds to mitigate home-station training limitations. Although workarounds are preferable to forgoing the training, they often result in training that is of lower quality or that creates “negative” training. Negative training is practicing procedures in a manner inconsistent with how an action would be performed in combat, which results in developing bad habits. In Europe, in some instances the Army adapts maneuver training to fit the land available and the Air Force flies unrealistic air-to-ground attack training missions. In the Pacific, the Air Force must perform workarounds in Korea and Japan, including delaying weapons arming when approaching the training ranges and using substitute signals to replicate threat emitters. Following are examples of such workarounds. In Italy, one of the Army’s local training areas is neither large enough nor wooded enough for units to accomplish their required training. For a unit to perform its required flanking maneuver, it does so in pieces so that the land will accommodate the event. To train on what to do after making contact with the enemy, soldiers told us, a member of the unit hides behind a pile of sandbags in an open field. The other members move through the open field, and at some point the hidden soldier playing the role of the enemy initiates contact for the unit to react to. This workaround does not provide realistic training, because there is only one possible place the “enemy” can be. 
Army officials based in Italy said that this local training area is not the preferred place for units to conduct the type of training described and that other training areas are available and used between 150 and 220 times per year. Air Force pilots in the United Kingdom have to both simulate air-to-ground attacks using training lasers instead of real lasers and train at different altitudes than they would likely operate at in combat. According to personnel at the fighter wing, training lasers create bad habits, especially for younger, less experienced pilots, because the training laser has a shorter range, which does not allow for training on the longer-range targeting likely in combat. In addition, flying at altitudes different from those likely to be used in combat affects pilots’ timing, habit patterns, situational awareness, and engagement times. For example, because air-to-air missiles have twice the range at high altitude as at low altitude, the inability to train at high altitudes does not allow pilots to practice firing missiles in a realistic combat scenario. In Korea, at the Koon-ni range, pilots have to delay arming their weapons until final approach. According to Air Force personnel, this is negative training because, in actual combat, weapons are armed well before the final approach. In Japan, to get practice against more than the two threat emitters at Ripsaw Range, pilots from the fighter wing must employ a “trick file” to fool their aircraft’s onboard electronic warfare systems into treating weather and other civilian radars as threat emitters. While this workaround enables the aircraft’s sensors to pick up the radar signals as if they were threat systems, the training is not realistic. The commercial radars are always turned on, making them easy to find. In combat situations, adversaries keep their air defense radars off as much as possible, making them much more difficult to locate. 
When units are unable to mitigate their training constraints with a workaround, the next course of action is to deploy to complete training requirements. While all units have to deploy to major training centers, such as the Army’s Combat Maneuver Training Center in Hohenfels, Germany, to obtain some of their higher-level collective training skills, we found that all non-CONUS units had to deploy to complete training that CONUS units normally conduct at home station. Non-CONUS units deploy to other locations within the country in which they are stationed (in Alaska and Hawaii, to training facilities elsewhere in those states), to other countries within their theaters, or back to the United States to complete training. While deployments allow the units to complete a great deal more of their training, they result in increased costs and more time away from home, although both DOD and the Congress are trying to reduce time away from home. Data we collected from each of the military services’ commands in Europe and the Pacific show that, in many cases, when an entire country’s training facilities (including both U.S.- and host-country-operated facilities) are considered, or, in the case of Alaska and Hawaii, all facilities in those states, units are able to meet many of their training requirements. Because some facilities are not located near where units are stationed, Army and Marine Corps ground maneuver units and some Navy aviation units and ships must deploy to training facilities elsewhere in the country or state in which the unit is based, and sometimes to other locations in their theater of operations. Air Force wings, except those in Korea, must deploy outside the country or state in which they are based to complete their training. The following is a discussion by service of overall training capabilities. Tables 1-4 show each service’s training capabilities and how well the commands believe their training facilities in that country or state satisfy their training needs. 
At our request, the service commands graded their locations on a high, medium, or low scale. High (H) denotes that units can fully satisfy, or satisfy a vast majority of, the capability; moderate (M) denotes that most of the capability can be satisfied; and low (L) denotes that very few to none of the training requirements can be satisfied in country or within the state. Because each service has different training requirements, the capabilities being rated vary. As shown in table 1, Army units can meet most training needs in country or state. Army units mainly deploy within country or state to obtain maneuver, major gunnery, and combined-arms live-fire training at the company level or higher. Army units in Germany deploy to the Grafenwoehr and Hohenfels training areas an average of 28 days per year to accomplish this training. Army units in Italy deploy to Grafenwoehr twice a year for about one month and to Hohenfels once a year for about 25 days to accomplish this training. In Korea, Army forces do not deploy away from Korea for training because of their mission. However, units have always had to deploy to larger training areas within country to complete necessary maneuver training. For example, each of the five armor and mechanized battalions in Korea deploys on average about 7 weeks each calendar year for maneuver training and, in total, the division’s four aviation battalions deploy for training on average about 2-1/2 weeks each calendar year. Army units in both Hawaii and Alaska deploy within their respective states to accomplish their training requirements. This is particularly true for live-fire combined-arms training. There are no Army combat units permanently stationed in Japan. As shown in table 2, Marine Corps units’ ability to meet training requirements is more limited than the Army’s. Units must deploy to achieve most of their combined-arms live-fire training requirements. 
In Japan, on the island of Okinawa, Marine Corps training is largely limited to small arms live-fire and maneuver training at the company level and below. Units must deploy off Okinawa to maintain basic skill training. Since 1996, to conduct artillery live-fire training, 150 to 700 Marines stationed on Okinawa have deployed to the Japanese mainland four times a year for 30 days. Live-fire and maneuver training above the platoon and squad level, and any integrated combined-arms live-fire training involving coordinated air and ground assault, also must be conducted away from Okinawa. For each of these training exercises, about 1,000 sailors and marines deploy for 40 days. In Hawaii, Marine Corps forces on Oahu must deploy to the Army’s Pohakuloa Training Area on the island of Hawaii, about 200 miles from Oahu, to conduct combined air and ground task-force training. Each deployment lasts between 25 and 30 days and involves a maximum of 2,100 Marines. Prior to September 1998, the Marines would have conducted most of this training at the Army’s Makua military training area on Oahu, lessening both deployment days and cost. Principally because of transportation costs, the Marines estimate it costs $500,000 more per year to train at Pohakuloa than it does to train at Makua. There are no Marine Corps combat units permanently stationed in Europe. As shown in table 3, Navy units have limited ability to meet training requirements in Japan, including Okinawa. Deployments are often needed to drop live ordnance, obtain proper electronic warfare training, fly at low altitudes, or participate in combined air and ground forces training. For example, in Japan the carrier wing stationed at Atsugi Naval Air Field in the Tokyo suburbs deploys to maintain certification and qualification for aircraft carrier landings. Since 1992, aircrews have had to deploy to Iwo Jima, about 674 nautical miles from Atsugi, 2 to 3 times per year for this training. 
It requires between 350 and 500 personnel for a 10-day period to accomplish this training, which must be done prior to each carrier deployment. Because of Iwo Jima’s remote location and lack of an alternate emergency airfield, practicing carrier landings there requires a safety waiver. In addition, these aircrews must also deploy to complete air-to-ground warfare training by going either to a target island near Okinawa, nearly 950 nautical miles away, or to Farallon de Medinilla, which is nearly 1,400 miles from Atsugi. For electronic warfare training, Navy aircrews stationed in Japan usually deploy to Pilsung Range in Korea, nearly 650 miles from Atsugi. During our visit to Japan, naval aviators said that it was not uncommon for them to deploy in excess of 200 days per year. There are no ships or carrier air wings permanently stationed in Europe. As shown in table 4, other than in Korea and Alaska, Air Force units have limited ability to train in the locations in which they are stationed. Many units must deploy to the United States to fulfill their live ordnance, electronic warfare, and low-altitude flying requirements. For example, they deploy to the United States to participate in combined air and ground forces training, such as Red Flag exercises, and to participate in weapons testing exercises. The Air Force wing in Italy relies on deployments to Red Flag and weapons testing and delivery exercises to accomplish required training, such as air-to-ground attacks, munitions employment, and low-altitude flying, because it does not have access to an air-to-ground range. In contrast, for CONUS-based units Red Flag is the culmination of training, not an opportunity to obtain training not available at home station. The Air Force wing in the United Kingdom also deploys to the United States for live-fire training using laser-guided bombs and to engage in air-to-ground training on tactical ranges. 
Additionally, United Kingdom-based units rely on deployments to Red Flag exercises or weapons system evaluation programs to complete their electronic warfare training. The Air Force in Europe discontinued use of a joint British and U.S. electronic warfare training range, Spadeadam, in October 2000, and the range is currently available on a pay-as-you-use basis. Because of the cost, the fighter wing did not use this range during fiscal year 2001. Furthermore, a second option, the electronic warfare training range available in Germany, is not used on a routine basis because the distance from the United Kingdom requires tanker support to train there, which increases training cost. In Japan, the wing stationed on Okinawa, like the wing in the United Kingdom, does not have access to an electronic warfare range. This wing deploys to Ripsaw Range in northern Japan or to Pilsung Range in Korea to perform electronic warfare training. There are no active duty Air Force combat units stationed in Hawaii. In some instances, certain types of training cannot be completed notwithstanding service efforts. Specifically, the Air Force in both Europe and the Pacific and the Navy in the Pacific are unable to complete all their required training events. Following are examples of training that cannot be completed. For the Air Force, individual units report to their command what types of training they were unable to accomplish in an internal document called the “End of Fiscal Year Training Shortfalls Report.” The fighter wing in Italy reported that it could not complete its basic surface attack or night close-air-support training, and the fighter wing in the United Kingdom reported that it could not accomplish all of its required night flying or electronic combat air-to-ground deliveries in fiscal year 2001. In Korea, fighter squadrons reported that they could not satisfy their night-flying requirements because aircraft are not allowed to fly with their wing lights off. 
This lowers combat capability because during training it is impossible for pilots to avoid looking at anti-collision or navigation lights, which would not be visible during combat. In Japan, the wing stationed on Okinawa is unable to complete its electronic warfare or low-altitude training requirements because there is no electronic warfare range near Okinawa and because low-altitude overland flights are not permitted on Okinawa. Five U.S. surface ships stationed in Japan are unable to complete their training requirements because they cannot fire the Rolling Airframe Missile, which adversely affects their readiness. The targets used to qualify this missile cannot be launched and controlled from sites on Okinawa or elsewhere in Japan. According to Pacific Fleet officials, they arranged for alternate targets, and the ships needing to fire the Rolling Airframe Missile did so at Farallon de Medinilla and Okinawa in March 2002. Now that this is done, Pacific Fleet officials expect these ships’ readiness to increase. The ships are to maintain their currency through simulation. Our review of unit readiness assessments for almost all non-CONUS combat units in Europe and the Pacific for the last 2 fiscal years showed that most units consistently reported high levels of training readiness. The impact of limitations and restrictions on training readiness was rarely reflected in unit readiness reports, although individual services may report these limitations in other ways. Each month, or whenever a change in readiness occurs, units report their readiness status through DOD’s primary readiness reporting system, the Global Status of Resources and Training System. Units report their status in four resource areas, one of which is training. A unit’s training readiness status is determined by the present level of training of assigned personnel as compared to the standards for a fully trained unit as defined by joint and service directives. 
We analyzed monthly Global Status of Resources and Training System data for fiscal years 2000 and 2001 to see how often non-CONUS combat units were reporting training readiness at high levels and at lower levels. Our analysis included units from the Army divisions and Air Force fighter squadrons in Europe and the Pacific, and selected non-CONUS Navy and Marine Corps units in the Pacific. For the units that reported low training readiness, we examined the specific reasons cited for the lowered training readiness and also reviewed the commanders’ comments to ascertain whether they attributed any of their training readiness shortfalls to training range or host country restrictions. Any time a unit is not at level one, it must identify the reason why, and the readiness reporting instruction provides a list of reasons for commanders to choose from, including a reason for identifying problems caused by inadequate training areas. In addition, commanders may submit their own remarks on any subject. Our analysis of unit readiness reports showed that during fiscal years 2000 and 2001, combat forces stationed in Europe rarely reported low training readiness. In the Pacific, with the exception of U.S. naval forces stationed in Japan, forces also rarely reported low training readiness. Units from both theaters that did report low training readiness rarely attributed the degradation to inadequate training areas; rather, they cited other factors, such as personnel shortages or operational commitments. Further, in those instances in which Air Force units reported low training readiness, Air Force commanders never cited training area limitations or host country restrictions as contributing factors. Army and Marine Corps commanders did cite training area limitations or host country restrictions as contributing factors, but only infrequently. 
Naval forces stationed in Japan reported low training readiness more often than other forces, but still only a small proportion of the time. Inadequate training areas or ranges were the third most frequently cited reason for the degraded training readiness. Further, when commenting on their units’ low training status, commanders of these units often cited the inadequacy of the ranges available to them and other restrictions that limited their ability to train. For example, one unit commander commented that the inability of his fighters to carry live munitions out of Atsugi Naval Air Field was a contributing factor to his lowered training readiness. The limitations of the Global Status of Resources and Training System are well known in DOD. For the most part, military officials in both theaters and Office of the Secretary of Defense officials told us that the unit readiness report is subjective and is not a vehicle to report training shortfalls and the associated limitations or restrictions. Officials within the Office of the Secretary of Defense also noted that the reporting system does not function as a detailed management information system objectively counting all conceivable variables regarding personnel, training, and logistics. Rather, we were told that it asks commanders to report on whether or not their units are combat ready or could be combat ready in a comparatively short period of time. However, as noted earlier, the readiness reporting system contains what are called reason codes to indicate the cause of lower reported readiness, and these reason codes include inadequate training areas. There is no overall training shortfalls report that would inform senior DOD leadership of a unit’s inability to obtain required training. However, individual Air Force units report to their command what types of training they were unable to accomplish and why in what is called their End of Fiscal Year Training Shortfalls Report. 
The Army has recently revised its training readiness reporting instructions to make the reporting more objective, and the Marine Corps has an initiative underway to improve the accuracy, objectivity, and uniformity of its training readiness reporting, but there are no DOD-wide initiatives to make such improvements. U.S. military commands and services are taking a variety of actions to address constraints, including (1) negotiating with host governments to lessen restrictions on existing training areas; (2) seeking to work with other countries to create additional training opportunities, such as expanding bilateral exercises to include training that can no longer be conducted at home station; and (3) using technology to create, among other things, transportable training systems designed for training outside the usual training areas. However, the regional military commands do not have a unified strategy for coordinating efforts to improve training. Such a strategy could prevent the individual services from pursuing solutions to their training shortfalls that are unintentionally detrimental to other services or that unintentionally sacrifice some training capabilities to improve others. In most cases, individual services or unit commanders are working with host countries to lessen restrictions. This results in individual solutions, rather than a set of coordinated actions, and these solutions sometimes adversely affect other services or training capabilities. The following are examples of various alternatives and their effects. Both Army and Air Force officials in Italy have a very positive working relationship with their Italian counterparts and the U.S. Embassy’s Office of Defense Cooperation. The Air Force is currently working with them to relax the restriction on the number of sorties allowed per day. The Air Force is restricted to 44 sorties per day, which makes it very difficult to accomplish training, especially after aircraft were added to the wing. 
The Air Force is negotiating to increase sorties to 63 per day. U.S. Army helicopters stationed in Italy are restricted to 12 sorties per day, and on a weekly basis only 15 of these sorties can be at low altitude. The Army needs several helicopters to take off and land multiple times to execute a training mission, which it views as a single sortie, while under the agreement the Italian government counts each helicopter on the mission as a separate sortie. This restriction, as currently defined by the Italian government, may limit Army helicopters to no more than 1 day of effective training per week. Army personnel said that there was a miscommunication between the Air Force and the Army about the definition of a sortie during the initial negotiations. In other European countries with long-standing training constraints, actions have been taken to resolve issues. In these cases, the services worked closely with the governments and militaries to address new issues as they surfaced, such as the impact of the foot-and-mouth disease outbreak in the United Kingdom in 2001. In some instances, certain restrictions are the result of political agreements and cannot be opposed. An example is the low-altitude training restriction of 1,000 feet above ground level that Chancellor Kohl of Germany and President George H. W. Bush agreed upon. Air Force pilots at Misawa Air Base in northern Japan are allowed to use a nearby air base operated by the Japanese Air Self Defense Force when they have to divert their F-16s because of inclement weather. Ideally, the pilots should practice such landings at the air base before they need to use it in an emergency. However, they are unable to practice because of an agreement reached prior to 1985 by local Japanese military officials and a local U.S. Navy official when Misawa was a U.S. Navy installation. Under the agreement, Navy P-3 aircraft were allowed to practice such landings at the air base, but U.S. 
fighter aircraft could land there only in an emergency. At the time, the Navy had no fighter aircraft at Misawa, and the limitation did not seem significant. In Korea, U.S. military officials and American embassy personnel are working with their host government counterparts in a coordinated effort to, among other things, lessen training restrictions and remove residential and commercial development from critical training areas. According to U.S. military officials in Korea, the resulting Land Partnership Plan was designed to consider the needs of all the services because previously some local commanders had made agreements that met their short-term needs but ultimately sacrificed broader, longer-term U.S. military interests. Under the plan, the United States is to return about 33,000 acres of land it currently uses and reduce its major installations from 41 to 26. In exchange, Korean civilian housing, farming, and commercial buildings are to be removed from the remaining U.S. installations and training areas. The United States is also to receive greater access to Korean-owned-and-operated training areas and ranges. The plan is to be phased in over a 10-year period. The plan has been completed and is awaiting final United States and Korean government approval. If implementation does not begin soon, U.S. Forces Korea estimates that its forces will face training-readiness shortfalls by 2003. Army officials in Hawaii recently negotiated with local groups the reopening of the Makua training area on the island of Oahu. The agreement provides training opportunities that satisfy some of the Army’s requirements. However, the Army did not include the Marine Corps in the negotiation. According to Army officials in the Pacific, the Army did attempt to include provisions for Marine Corps training requirements in negotiating with the lawsuit plaintiffs but was unable to reach an agreement that would provide specific training opportunities for the Marine Corps. 
These Marine units are heavily dependent on Army-operated training ranges to meet a sizable portion of their training needs, most notably training for company-level and higher exercises that involve live fire and combined arms. Thus, for at least the next three years, Marine units must continue deploying to another training area, which increases both time away from home and cost. The theater commands and their service components are working with countries throughout their theaters to develop additional training opportunities. The following are examples of these efforts and the problems and drawbacks that they sometimes create. The Army in Europe is working with eastern European countries to develop training opportunities. For example, in 2000 and 2001, the Army conducted a live-fire and combined-arms exercise in Poland called Victory Strike. According to Army in Europe officials, the exercise allowed them to practice against real-world systems and meet training standards by taking advantage of the location, opportunity, time, and space of the Poland ranges. This exercise also allowed the Army to accomplish training that it would not have been able to perform in Germany. The Air Force in Europe is working with countries throughout the European theater—including countries in north Africa, such as Tunisia, and new NATO nations, such as Slovakia and Bulgaria—to negotiate the development of training ranges or opportunities. It is also coordinating with the Navy in Europe to develop possible joint-use and jointly funded training-range opportunities in Croatia and Slovenia. Further, the services are trying to gain access to training ranges in countries where U.S. forces do not now train, such as the Czech Republic and Croatia. According to personnel in some units we visited, units have little input into the design of joint training exercises. While a joint exercise may provide great training for one U.S. service, it may provide little value for another. 
For example, Air Force personnel stated that the Victory Strike exercise in Poland was not adequately coordinated to maximize their involvement. During the first part of the exercise, they were not able to communicate with other participants, and they never performed the close air support role that they thought they were there to perform. The U.S. Pacific Command supports a number of training exercises with allied and friendly countries in the region. The exercises include Tandem Thrust, a biennial bilateral exercise with Australia; Cobra Gold, an annual bilateral exercise with Thailand; and Balikatan, a joint exercise with the Philippines. These exercises provide U.S. forces with access to training areas that (1) permit integrated and combined-arms training that would be difficult to accomplish using only existing U.S.-controlled ranges and training areas and (2) are less restricted than the areas used at home station. Relying on such exercises does have drawbacks. When foreign ranges are used, in deference to host governments and other participants, U.S. forces may not be able to conduct the training in a manner that would provide the quality of training they would conduct on their own ranges. According to U.S. Pacific Command and Marine Forces Pacific officials, a few of the exercises had little value because U.S. forces essentially had to train their foreign hosts on U.S. tactics and were unable to train at the level needed to accomplish their desired goals. In addition, if U.S. forces must devote time during exercises to training they would typically conduct at home station, they may not conduct as much of the higher-level training needed or conduct it as effectively. Eliminating certain training restrictions is impossible, so the services are looking to technology, such as simulation training, to provide training that non-CONUS units cannot otherwise obtain. 
Technologies currently exist in the European theater to provide training for individual weapons systems and equipment, such as F-15s, tanks, and Bradley Fighting Vehicles. In the Pacific theater, the use of technology, including simulation, is essential to ensure that U.S. military forces are able to maintain their combat readiness. Training simulators for Europe-based units are available at major training facilities, such as Grafenwoehr, and some home stations. With these additional home-station training options, the units do not have to deploy as frequently. However, the use of technology for training has caused other problems, some inadvertent and some age-related. Following are examples of the non-CONUS use of technology for training and its effects. The Air Force in Europe acquired a rangeless training system called the U.S. Air Force Europe Rangeless Interim Training System to allow flexibility in how it uses available airspace for training. Before the system was acquired, aircrews had to train on an instrumented range in order to receive feedback from their training. With the system, aircrews can train in available air space and receive feedback from devices installed in their aircraft. In theory, the new system should make quality air-to-air training easier to accomplish despite the increasing restrictions on available air space. However, this is not the case for the F-15C squadron in the United Kingdom. The Air Force in Europe acquired the system for the F-15Cs in the United Kingdom and terminated the contract for the existing range, which was the best air space available for air-to-air training. Now, actual air-to-air training is more difficult for that squadron to accomplish because of the lack of quality air space. Air Force in Europe officials said that they were unaware that quality air space would be more difficult to schedule when they terminated the existing range contract. In Germany, many local training areas are not sufficient for tank maneuvering. 
The simulator provides an opportunity for soldiers to become familiar with the procedures while they are at home station. However, units we spoke with said that the simulation available at home station is old and rarely operational. According to Army in Europe officials, they plan to have these replaced. A mobile trainer is to be fielded in fiscal year 2005. In Korea, the Army will be highly dependent upon technology in the form of simulators, such as for tank gunnery; instrumentation systems; and a variety of other systems that are being fielded Army-wide. Using such technology, Army officials will be able to improve their training capabilities for large-unit maneuvers. Additionally, the Army uses portable target systems on Korean ranges to achieve training to U.S. standards. The portable systems will become even more important as the Army forces in Korea expand their previously discussed use of Korean-controlled training areas and ranges. In Japan, on Okinawa, one example of a technology-based system is the portable air-combat maneuvering system known as the Kadena Interim Training System. The system—a pod fitted to the aircraft’s wing—is designed to improve the quality of fighter air-to-air training and is “rangeless.” It does not need ground-based instrumentation to function and is not dependent on having a fixed range. The system was first deployed at Kadena Air Base on Okinawa, but the Air Force has started deploying additional systems to Osan Air Base in Korea, and it expects to deploy the system to Misawa Air Base in Japan later in 2002. According to officials from Headquarters, U.S. Pacific Fleet in Honolulu, the Navy is also developing a portable air combat maneuvering system for its fighter aircraft and plans to fund the system in 2004. On Okinawa, the Marine Corps currently uses marksmanship trainers. 
The Marines said that they are scheduled to receive three additional training simulators: staff trainers to train Marines in the use of command and control systems; gunnery and tactical trainers for light armored vehicles; and supporting arms call-for-fire trainers. In Japan, the Navy also wants to fund the use of portable antisubmarine warfare ranges and use simulators to maintain currency for the Rolling Airframe Missile, as mentioned earlier. In Hawaii, the Pacific Missile Range Facility has developed a computer-simulated target “island” to enable surface ships to conduct naval surface fire support training. With the exception of Korea, the regional commands do not have a coordinated strategy for pursuing actions to mitigate training limitations. The norm is for individual services to negotiate solutions for their individual training constraints. In the case of Japan, U.S. Embassy officials in Japan told us that individual service efforts were the recommended course of action because local service representatives were the most knowledgeable about their issues and should be the ones to resolve them. However, as discussed earlier, a lack of coordination has at times unintentionally been detrimental to another service. For example, we previously described an instance in Japan where a local Navy official negotiated an arrangement to practice landings at a Japanese airfield that resolved a Navy constraint but did not consider future needs. In the case of Korea, U.S. Forces Korea officials told us that the previously described Land Partnership Plan was designed to consider the needs of all the services because arrangements made in the past by local commanders sometimes sacrificed broader, more long-term military interests. In addition, when the regional commands or an individual service arranges bilateral and multilateral training exercises, they do not always allow all the other military service participants input into the design of the exercise. 
This lack of coordination has at times prevented exercises from maximizing all the services’ involvement. As we discussed earlier, this was the case for the Air Force in its participation in an Army exercise in Poland called Victory Strike. Even though units we visited told us about numerous constraints on their ability to complete required training, units have rarely reported degraded training readiness. This practice undermines the usefulness of readiness reporting. Also, at present, there is no consolidated listing of training constraints for non-CONUS locations. Therefore, senior DOD leadership, such as the Senior Readiness Oversight Council, which monitors the readiness of U.S. military forces, as well as service leadership above the affected commands in Europe and the Pacific, cannot be aware of the extent of training constraints faced by non-CONUS units. Military services and regional commands are taking a variety of steps to mitigate constraints and increase training opportunities without a coordinated strategy that ensures that actions taken by one party do not adversely affect another. Our work shows that actions taken by one part of DOD can in fact adversely affect other parts of DOD. First, individual services, and not regional commands, are pursuing solutions to their training shortfalls with host governments—solutions that may inadvertently be detrimental to other services. Second, commands do not always allow the services much, if any, input into structuring bilateral and multilateral training events. Without their input, training exercises may not focus on obtaining some required training and can unnecessarily favor one service over another. Third, when DOD acquires new technology to improve training capabilities, it is not considering all factors of the training environment and is thus sacrificing some training capabilities to improve others. 
We recommend that the Secretary of Defense direct the chiefs of the military services, in conjunction with the Under Secretary of Defense for Personnel and Readiness, to develop a report that will accurately capture training shortfalls for senior DOD leadership. This document should objectively report a unit’s ability to achieve its training requirements. It should include (1) all instances in which training cannot occur as scheduled due to constraints imposed by entities outside DOD, as well as all instances when training substitutes are not sufficient to meet training requirements; (2) a discussion of how training constraints affect the ability of units to meet training requirements and how the inability to meet those requirements is affecting readiness; and (3) a description of efforts to capture training shortfalls in existing as well as developmental readiness reporting systems. We further recommend that the Secretary of Defense direct that the warfighting commands, in concert with their service component commands, develop an overarching strategy that will detail the initiatives the command and each service plan to pursue to improve training, such as access to additional host government facilities, participation in bilateral and multilateral exercises, and acquisition of new technology. This strategy needs to be vetted throughout the services to ensure that all factors are taken into consideration, that actions taken to improve training opportunities for one service are not made to the detriment of another service’s ability to train, and that training capabilities are not lost unintentionally. In written comments on a draft of this report, DOD stated that it concurred with the content of the report and its recommendations. DOD suggested that our recommendation on reporting training shortfalls be expanded (1) to include both active and reserve training shortfalls and (2) to specify in greater detail what the recommended report should address. 
Regarding the inclusion of both active and reserve training shortfalls in our recommendation, we agree that conceptually this has merit, but because we did not examine reserve forces’ training shortfalls, we are not in a position to include them in our recommendation. We have, however, expanded this recommendation to identify some topics that reporting on training shortfalls should include. These topics are not meant to be all-inclusive because DOD is in a better position than we to determine exactly what to report. In responding to our recommendation that an overarching strategy be developed to detail initiatives being pursued to improve training, DOD stated that such an effort should help generate a variety of options to ameliorate the current training deficiencies. DOD’s comments are reprinted in their entirety in appendix IV. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. If you have any questions, please call me on (757) 552-8100. Key contributors to this report were Steve Sternlieb, Laura Durland, Frank Smith, and Lori Adams. In Europe, as shown in figure 5, Army and Air Force units are primarily stationed in Germany, Italy, and the United Kingdom. The Army in Europe has two divisions, the First Infantry Division headquartered at Wuerzburg, Germany, and the First Armored Division headquartered at Wiesbaden, Germany. In addition, the Army’s Southern European Task Force is stationed at Vicenza, Italy. The Air Force has three fighter wings in Europe. The 48th Fighter Wing at Lakenheath Air Base, United Kingdom, has F-15Cs and F-15Es; the 52nd Fighter Wing at Spangdahlem Air Base, Germany, has A-10s and F-16s; and the 31st Fighter Wing, located at Aviano, Italy, has F-16s. 
In the Pacific, as shown in figure 6, the Army, Air Force, Navy, and Marine Corps have combat units stationed in Japan and Korea. The Army’s 2nd Infantry Division is stationed at Uijongbu, Korea. The Air Force has the 18th Wing at Kadena Air Base on Okinawa, whose fighter aircraft are F-15Cs. The 35th Fighter Wing at Misawa Air Base in Japan has F-16CJs. In Korea, the 51st Fighter Wing at Osan Air Base has A-10s and F-16s, and the 8th Fighter Wing at Kunsan Air Base has F-16s. In Japan, the 7th Fleet is headquartered at Yokosuka Naval Base; however, there are ships at both Yokosuka and Sasebo Naval Bases. In addition, the Navy has Carrier Air Wing 5 located at Atsugi Naval Air Field, Japan. The Marine Corps’ III Marine Expeditionary Force, composed of the Headquarters, 3rd Marine Division, 1st Marine Aircraft Wing, and 3rd Force Service Support Group, is stationed on Okinawa. To determine the types of training constraints faced by non-CONUS-based units and whether they are likely to increase in the future, we interviewed officials at all levels in DOD, from the Office of the Secretary of Defense (Personnel and Readiness) to unit-level service representatives from all services in both the European and Pacific theaters. We obtained documentation detailing training shortfalls where available. We conducted interviews with component command representatives from each of the services in both the European and Pacific theaters and headquarters personnel within each service responsible for training range programs. To aid us in systematically collecting country-wide training range capabilities for each service, we developed a training-capabilities data collection table that we asked each of the services’ subordinate commands to fill out on how well they were able to meet their training requirements. We included these tables on pages 20-25. We conducted our work in the five major countries in which U.S. 
forces are stationed: Germany, Italy, Japan, South Korea, and the United Kingdom, as well as the state of Hawaii. We visited a variety of training areas in each location. We did not conduct work involving Vieques, Puerto Rico, because our focus was Europe and the Pacific and the training constraints involving Vieques are well known. Table 5 depicts all the major units and training locations we visited. To determine the impact that training constraints are having on the units’ ability to meet their requirements, we obtained information on such impacts from unit-level service representatives from all services in both the European and Pacific theaters. In doing so, where training was not accomplished, we discussed whether these shortfalls were reflected in readiness reporting. To independently assess the impact of training constraints on reported readiness, we obtained and analyzed reported readiness data for the European and Pacific theaters for fiscal years 2000 and 2001 to determine if units had reported any diminished readiness as a result of training limitations. To determine what alternatives were being pursued by the services to overcome their training shortfalls, we interviewed unit-level and component-command representatives from all services in both the European and Pacific theaters. They provided us data and documentation on what initiatives they are pursuing to alleviate training limitations. We also interviewed embassy representatives from the defense attachés’ offices in each of the previously mentioned countries that we visited except Korea to determine what role they play in addressing training limitations. We conducted our review from June 2001 through February 2002 in accordance with generally accepted government auditing standards.
Rigorous, realistic training is key to military readiness. All U.S. military forces conduct frequent training exercises to hone and maintain their war-fighting skills. Combat units stationed outside the continental United States are able to meet many of their training requirements but face constraints in such areas as (1) maneuver operations, (2) live ordnance practice, and (3) night and low altitude flying. Training constraints cause adverse effects, including (1) requiring workarounds that can breed bad habits affecting combat performance; (2) requiring military personnel to be away from home more often; and (3) preventing training from being accomplished. To address these concerns, military commands and services are negotiating with host governments to lessen restrictions on existing training areas, but such actions are often done at an individual-service level and sometimes create unforeseen problems for other services and for existing training capabilities.
SCA was enacted to give labor standards protection to employees of contractors and subcontractors providing services to federal agencies in the United States. SCA requires that, for contracts exceeding $2,500, contractors pay their employees, at a minimum, the wage rates and fringe benefits that have been determined by DOL to be prevailing in the locality where the contracted work is performed. The types of service jobs covered by the act include, for example, security guard services, food service, maintenance, janitorial services, clerical workers, and certain health and technical occupations. Until recently, DOL regulations required that federal contracting agencies complete and submit a form to DOL indicating their intention to offer a service contract and requesting current wage and benefit determinations for the occupational class(es) and geographic area(s) involved in the contract. Since the mid-1990s, however, some contracting agencies have been able to obtain wage determinations through a DOL online wage determination database, rather than requesting one from DOL. Many of their covered service contracts were renewals and the applicable SCA wage determinations for these contracts were already well established and posted online for information purposes. For these reasons, DOL entered into memoranda of understanding with several agencies to allow them to use posted standard wage determinations without first formally requesting a new one. On August 26, 2005, DOL issued regulations that allow all federal contracting agencies to use its www.wdol.gov Web site to meet their obligation to obtain SCA wage determinations from DOL. This final rule eliminates the required paper form when requesting a wage determination. Under SCA, WHD establishes wage rates that apply to the United States, including the District of Columbia, and certain territories. 
WHD issues SCA wage determinations that are location-specific, listing nearly all standard occupations on each wage determination. These wage determinations are generally referred to as “consolidated” wage determinations. WHD strives to update its list of consolidated wage determinations annually, issuing 410 consolidated wage determinations covering almost 300 standard occupations in 205 geographic locations. Altogether, these consolidated wage determinations contain approximately 61,500 individual wage rates. In addition, between August 1, 2004, and July 31, 2005, WHD issued at least 15,786 other wage determinations upon request, including those for non-standard occupations and conformance requests. See appendix II for an example of a consolidated and a nonstandard wage determination for a specific geographic locality. FPDS statistics indicate that federal service contracts continue to increase in number and total dollar volume each year. According to ESA’s fiscal year 2003 Annual Performance Plan, federal contractors and subcontractors employed nearly 25 percent of the civilian workforce—about 26 million workers—in the U.S. economy. Although the exact number of workers in the subset covered by SCA is unknown, it has been estimated that hundreds of thousands of federal service contract workers are employed annually under such contracts. WHD consults multiple wage data sources and relies on analysts’ professional judgment when making wage determinations, but the process lacks transparency and leaves wage determinations prone to criticism. When making a wage determination, WHD analysts consult several sources of information, such as its SCA directory of occupations and data collected through two BLS national wage surveys, for wage data on occupations. Relying on these tools and their own expertise, analysts calculate prevailing wages and fringe benefit amounts for specific geographic locations. 
Stakeholders contend that the wage determination process is not transparent and that the resulting wages do not always reflect local wage conditions. As a result, analysts spend considerable time responding to inquiries about the methodology used to determine wages. Stakeholders with these concerns, such as unions and contractors, told us that they might have fewer questions about the process if WHD made more information available. In addition, WHD last issued a comprehensive edition of its SCA directory of occupations in 1993 and has no systematic process in place for updating it. As a result, the directory does not include a broad range of emerging occupations that are covered under SCA. WHD analysts consult the SCA directory of occupations as a first step in the process of determining wages. They then consult a number of different sources of data when calculating wage rates. Finally, analysts must also include the fringe benefit rate for the specific locality in each wage determination. WHD analysts use the SCA directory of occupations, a reference tool that describes standard service occupations typically utilized in the performance of SCA-covered contracts, to develop wage determinations. The directory is not just an information document; it is a critical part of the wage determination process throughout the federal contracting system. However, the process that WHD uses to update its SCA directory of occupations is not written down and is essentially ad hoc. There are neither written procedures that describe how or when WHD updates the directory nor a required or standard time interval for how often the directory should be updated. DOL has no systematic process for updating its SCA directory of occupations but instead updates it periodically. The current edition of the directory was issued in 1993. Since then, there have been three supplements to the directory. 
According to WHD officials, when there is a sufficient volume of smaller-scale changes proposed by stakeholders, they will issue a supplement to the directory. Stakeholders usually bring the need for supplements to WHD’s attention. Supplements can involve adding some classes of jobs as well as editing or removing others. WHD can make these changes to job classes either with or without getting stakeholder approval. A recent effort to update and release a new edition of the directory, begun in 2002, was initiated after federal contracting agencies, contractors, trade associations, and unions raised concerns that the existing directory did not meet their needs. In fact, stakeholders independently drafted an update to the directory and presented it to WHD. While WHD is not legally required to include outside parties in the update process, WHD has encouraged stakeholders to participate, allowing them to review all suggested changes. According to WHD officials, the update has been long in the making, due in part to the number of suggested changes received from and deliberated by the stakeholders. Some stakeholders, however, have expressed frustration with the length of time the update has taken. In response, one senior WHD official we spoke to explained that, in some cases, directory changes could have significant cost implications for both wages and fringe benefits at the local level and that careful consideration is necessary to make proper adjustments. Stakeholders, the official contended, may not realize the implications of the changes or additions that have been proposed. For example, a question was raised as to whether it was more appropriate to classify the occupation “truck dispatcher” as an administrative, clerical, technical, or professional position, since each category brings with it a different level of wages and benefits. Throughout the update process, several job categories and occupational classes have been added to, or deleted from, the directory. 
WHD analysts responded to stakeholder needs for job classifications that were not available in the directory. For example, WHD added a job classification in response to a DOD need for an “unexploded ordnance technician.” WHD worked with DOD to develop an accurate description for placement in the directory. Similarly, the job category of “detention officer” was added at the request of U.S. Citizenship and Immigration Services because of the volume of hiring and the uniqueness of the duties performed. In these cases, WHD did not involve additional agencies in the process of changing the directory. Ultimately, WHD has the authority to decide which jobs are included in the directory. Despite recent efforts to update the directory, some common service occupations are still missing. Specifically, the directory does not contain the occupations “customer service representative” or “telemarketer.” Contracting agencies that need such services performed cannot acquire the wage rate from DOL’s online wage determination system and must request a separate wage determination from a WHD analyst. In addition, WHD officials told us that analysts sometimes receive multiple wage determination requests for the same unlisted occupations, thereby increasing their workload. The directory also does not list an occupational title for “call center representative.” A contractor told us that as a result, wage determinations for call center contracts with federal agencies generally listed these occupations as “general clerk I, II, and III.” According to this contractor, the wage determination for a general clerk is usually lower than the market rate for a call center representative. The contractor pointed out that federal agencies will likely have an increased need for call center representatives in the years ahead. 
Some contractors told us that, while they often must pay additional amounts to meet the market rate to be able to recruit qualified workers, they cannot submit the higher rates in their bid without risking the loss of the contract to a competitor. Contractors warned that, in cases like these where they lose a contract to a lower bidder, federal agencies may be at risk of contracting with employers who will provide a lower quality of services. According to these contractors, the difference in wage rates paid to workers on SCA-covered contracts and those not working on SCA-covered contracts can lead to some workers feeling demoralized. WHD officials told us that after the current update is issued, which is expected to occur in October 2005, no plans are underway for the next update. WHD analysts rely on professional judgment when calculating wage rates. WHD provides analysts with methodology worksheets that assist them in determining a wage. These worksheets provide an outline of how an analyst should proceed when certain conditions exist (such as when survey data are not available for a specific occupation). The worksheets are intended to guide analysts without dictating the exact determination process. More specifically, to determine a wage rate, analysts review the available wage data sources as well as previously issued wage determinations. Analysts base most wage determinations on nationwide survey data collected by BLS under the National Compensation Survey (NCS) and the Occupational Employment Statistics (OES) survey, or other data showing the rates that prevail in a specific locality. Analysts also take into account previously issued wage determinations when setting a new or revised wage rate. For example, to maintain general consistency from year to year, WHD instructs its analysts not to issue a rate lower than, or more than 10 percent above, the previously issued wage rate. 
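The year-to-year consistency guideline can be read as a simple clamp: the issued rate is held between the previously issued rate and 110 percent of it. The sketch below illustrates that reading only; the function name, and the interpretation of the guideline as a strict clamp, are our assumptions, since analysts also apply professional judgment.

```python
def constrain_rate(survey_rate: float, previous_rate: float) -> float:
    """Illustrative reading of the consistency guideline: never issue a
    rate lower than the previously issued rate, nor more than 10 percent
    above it. (An assumption for illustration, not WHD's worksheet.)"""
    ceiling = previous_rate * 1.10
    return max(previous_rate, min(survey_rate, ceiling))

# If survey data suggest $15.00 but last year's rate was $16.00, the
# issued rate stays at $16.00; if survey data suggest $20.00, the rate
# is capped at 110 percent of $16.00, i.e., $17.60.
print(constrain_rate(15.00, 16.00))
print(constrain_rate(20.00, 16.00))
print(constrain_rate(16.50, 16.00))
```

Under this reading, only survey rates that fall between the prior rate and 110 percent of it pass through unchanged, which is consistent with the report's observation that resulting rates may differ from the underlying BLS data.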
In addition, when wages have been set by a collective bargaining agreement, analysts are required by SCA to carry over those negotiated wages to contractors who take over ongoing contracts. Finally, analysts use the union dominant rate, when applicable. After selecting a data source, analysts review the wage information for different classes of the same occupation (e.g., the different classes, I, II, and III, of the occupation “secretary” require successively more advanced skills) and the pay relationships that exist between these job classes (i.e., the different classes of secretary are paid successively more for their advanced skills), and make adjustments as needed to address data abnormalities or inconsistencies. For example, an analyst would make an adjustment if the data showed lower wages for a secretary III than for a secretary I or II. Analysts also review occupations in the same broad job category (e.g., administrative support and clerical occupations) to ensure that different occupations performing commensurate duties receive similar pay. When data for an occupation are not included in existing wage surveys, analysts can establish a prevailing wage rate through a procedure called “slotting,” which involves comparing equivalent or similar job duties and skills between surveyed classifications and other classifications for which no survey data are available. For example, analysts may adopt the rate for a “computer operator” and use it for a “peripheral equipment operator” (whose duties include taking corrective actions to return equipment that directly supports computer operations, such as printers, to proper working order) because the job duties and skills required for both classifications are rated at the same level under the grading system for federal employees. 
Further, when the survey lists varying wage rates for several similar occupations, such as the “general maintenance trades,” analysts will determine the average wage and use that rate as the prevailing wage for the entire group of occupations. See figure 2 for a graphic illustration of some of the factors an analyst may consider when determining a wage rate. An additional reason why analysts must rely on professional judgment when determining wage rates is that BLS’s wage surveys were not designed for the purpose of determining wages and fringe benefit rates. While the BLS surveys may provide the most comprehensive wage data available, WHD analysts must perform some manipulation of BLS’s data when calculating wage rates. As a result, WHD’s reliance upon these data may not ensure that the wage rates it sets reflect labor market conditions. For example, because the survey responses may include the wage rates for some SCA service contract workers whose rates are set by a wage determination, analysts may not be using data that fully reflect the local labor market conditions. In other words, WHD, in trying to determine the market rate for certain occupations, may be referencing survey responses of its own derived rates. However, we did not attempt to determine the extent to which BLS data includes such information. In addition, one BLS survey used by WHD excludes smaller employers with fewer than 50 employees from its sample population. As a result, the survey results could inflate or deflate actual wages for the types of occupations typically employed by smaller employers. Another reason that BLS survey data may affect WHD’s ability to set rates that reflect market conditions is that the occupational classifications in DOL’s SCA directory of occupations do not always match OES occupational classifications, making it difficult for WHD analysts to match the OES wage data to the SCA occupation without significant analysis. 
Because OES does not collect data for each classification for every locality surveyed, WHD must sometimes use the “slotting” procedure to derive a wage determination. In addition to a wage rate, each SCA wage determination also includes the fringe benefit rate for the specific locality. Analysts generally set a universal fringe benefit rate that employers must pay to all workers in a specific geographic area regardless of their occupational class. The fringe benefit amount typically includes health and life insurance coverage, sick leave, retirement plans—items that are typically referred to as health and welfare (H&W) benefits—as well as vacations and holidays. WHD analysts arrive at the H&W rate used in wage determinations by consulting nationwide data from BLS’s Employer Cost for Employee Compensation (ECEC) survey. In contracts awarded since new regulations became effective in June 1997, the fringe benefit rate has most often been calculated on a “fixed cost per employee” basis, where each employee receives the same benefit amount. Employers may meet their fringe benefit obligations by paying the employee the cash equivalent of the specified fringe benefits. In June 2005, the “fixed cost per employee” SCA health and welfare benefit rate was increased to $2.87 per hour, which equates to about $497 per month. The wage determination process requires analysts to apply professional judgment in selecting both the appropriate source and method for calculating the prevailing wage rate. Contractors and other stakeholders contend that the process that analysts follow when determining a wage is not transparent and that determinations do not necessarily reflect local wage conditions. In fact, WHD does not include a description of the methodology used to derive the wage rates in its wage determinations, such as the wage data source used or the procedures analysts follow. 
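The hourly-to-monthly equivalence for the H&W rate ($2.87 per hour, about $497 per month) can be checked with simple arithmetic under the standard assumption of a 2,080-hour work year (40 hours times 52 weeks); the calculation below is our own check, not WHD's published method.

```python
# Convert the June 2005 H&W rate of $2.87/hour to a monthly equivalent,
# assuming a standard 2,080-hour work year (40 hours x 52 weeks).
hourly_rate = 2.87
annual_hours = 40 * 52                       # 2,080 hours
monthly = hourly_rate * annual_hours / 12    # about $497 per month
print(f"${monthly:.2f} per month")
```

The result, roughly $497.47, matches the "about $497 per month" figure cited in the report.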
As a result, analysts spend considerable time responding to inquiries from contractors, employees, union representatives, and others regarding how they determine wages. According to WHD officials, analysts received about 23,000 telephone inquiries in a recent 12-month period, mostly from service contract employees who want to know how their wage rate was calculated or why their rate differed from a similar rate in a neighboring locality. Congress, unions, and others also contact WHD staff to inquire on behalf of their constituents or members. WHD assigns these inquiries to analysts as they are received. WHD officials told us that the specific methodology of calculating a wage rate for a certain occupation in a certain geographic location can change from year to year based on a series of elements, such as the availability of survey data or an analyst’s professional judgment. While analysts do provide details to those who inquire, WHD does not provide individual methodology worksheets in writing, stating that doing so would result in additional inquiries as to why rates are not calculated by the same method as in the previous year and take analysts away from their primary task of issuing new and revised wage determinations. Stakeholders with concerns told us, however, that it would be helpful to them if more information about the process were made available. WHD receives criticism that its wage determination rates do not reflect market conditions. Some contractors say that private-sector wage data provide a more accurate measure of local labor market conditions than BLS survey data that were not designed for the purpose of determining wages and fringe benefit rates. 
However, WHD officials told us that to the extent its wage rates are perceived as not reflective of the market rate, one possible reason could be that WHD sets internal parameters for wage determinations (e.g., not issuing a wage rate lower than or more than 10 percent above the previously issued rate) to ensure consistency from year to year. As a result, while BLS survey data may be lower or higher than the resulting wage determination, analysts manipulate wage rates to ensure a consistent wage structure. WHD enforces SCA by conducting contractor investigations, ensuring contractor payments to employees, and providing compliance assistance to stakeholders. WHD investigates complaints from service contract employees, contractors, federal agencies, unions, and others who allege that contractors have failed to pay either the wages or fringe benefits, or both, specified in service contracts. WHD collects violation data, but it does not fully use these data to plan compliance assistance, target specific service industries or geographic locations for SCA investigation, or set strategic enforcement goals. When investigations find that contractors have failed to pay in accordance with contract wages or benefits, WHD acts to ensure that contractor payments are made to employees. WHD also provides compliance assistance to contractors, federal agencies, unions, and others to help them comply with SCA requirements and avoid SCA violations. SCA investigations originate when contract employees, federal agencies, competitor contractors, or employee representatives complain to WHD that a contractor has failed to comply with the wage or benefit requirements in a contract. WHD investigators then consult and interview contractor officials, inspect the contract and contractor payroll records, and interview service contract employees. WHD records investigation data, such as the name of the contractor, geographic location, industry, and the type of violation, in its WHISARD database. 
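The internal parameter WHD officials described for wage determinations, not issuing a rate below the previously issued rate or more than 10 percent above it, amounts to clamping the survey-derived rate between a floor and a ceiling. A minimal sketch, with hypothetical function names and rates of our own invention:

```python
# Hypothetical sketch of the internal parameter WHD officials described:
# the issued rate is held between the prior rate (floor) and 110 percent
# of the prior rate (ceiling). All names and figures are illustrative.
def constrain_rate(survey_rate, prior_rate, ceiling_pct=0.10):
    floor = prior_rate
    ceiling = prior_rate * (1 + ceiling_pct)
    return round(min(max(survey_rate, floor), ceiling), 2)

print(constrain_rate(survey_rate=14.00, prior_rate=15.00))  # 15.0  (survey below prior)
print(constrain_rate(survey_rate=17.50, prior_rate=15.00))  # 16.5  (capped at +10 percent)
print(constrain_rate(survey_rate=15.75, prior_rate=15.00))  # 15.75 (within bounds)
```

This is why the issued rate can sit above or below the raw BLS survey figure while still preserving a consistent year-to-year wage structure.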
When responding to complaints, WHD investigators review WHISARD data for prior contractor violations. WHD uses violation data on a case-by-case basis to determine whether an individual complaint warrants expansion to a more comprehensive “directed” investigation. For example, WHD may decide to expand the scope of an initial complaint to encompass other employees under the same contract, additional contractor locations, or other service contracts involving the same contractor. WHD records all alleged SCA violations in its WHISARD database and classifies investigations as either complaint or directed. WHD generates violation reports from WHISARD that summarize investigation findings. SCA violation reports for fiscal years 2003 and 2004 show that about 87 percent of all investigations during this period were classified as complaint and about 14 percent were classified as directed. Table 1 shows the number and percentage of complaint investigations and directed investigations for fiscal years 2003 and 2004. WHD headquarters and regional enforcement officials told us that a complaint-based enforcement strategy offers an efficient approach to enforcing multiple labor laws. Consequently, WHD does not analyze or use violations data from WHISARD to (a) examine the extent to which specific service industries or geographic locations may warrant increased compliance assistance or directed investigations under SCA or (b) develop SCA-specific strategic goals. Concerning the latter, while ESA’s 1999–2004 strategic plan contains specific outcome or performance goals for some labor acts, such as the Davis-Bacon Act and the Fair Labor Standards Act (FLSA), there are none for SCA. WHD has overall strategic enforcement goals that cut across all labor laws it enforces, such as improving timeliness in response to complaints and reducing the number of violators who have repeat or recurring violations. 
Moreover, ESA’s strategic plan uses violation data in WHISARD to focus enforcement efforts on low-wage industries in which employers have previously violated labor laws, such as FLSA minimum wage and child labor requirements. While the focus on low-wage industries may detect violations in some service contract industries, it does not assure that all service contract industries with serious or frequent SCA violations are identified. When a WHD investigation determines that a contractor has failed to pay wages or fringe benefits to contract employees, WHD attempts to reach agreement with the contractor regarding the amount of back wages and fringe benefits owed employees. WHD also monitors contractor activity to ensure that the amounts owed to employees are eventually paid to them. In fiscal year 2004, WHD initially investigated 654 reportable cases—cases with possible SCA violations—and ultimately found 493 cases with SCA violations that began as an SCA investigation. In addition, 44 other cases, registered by WHD under other labor acts it enforces, had SCA violations. These 537 cases, more than 80 percent of the total number of SCA investigations, uncovered $18.7 million in contractor back wages and fringe benefits that were owed to employees. WHD obtained contractor agreements to pay $16.4 million to employees. Once a contractor has reached agreement with WHD on the amount of wages and benefits owed, WHD monitors contractor payments and does not conclude the case until the contractor has made full payment. WHD treats each instance of failure to pay a contract employee the proper wage as a separate violation of the act. Likewise, WHD considers the failure to pay that same employee the proper fringe benefit as a separate violation. Thus, a contractor who fails to pay the proper wage and the proper fringe benefit would be cited for two separate SCA violations. 
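As a check on the fiscal year 2004 figures above, the share of back wages and benefits that contractors agreed to pay works out to roughly 88 percent:

```python
# Fiscal year 2004 figures from the text, in millions of dollars.
owed_millions = 18.7     # back wages and fringe benefits found owed
agreed_millions = 16.4   # amount contractors agreed to pay
recoup_rate = agreed_millions / owed_millions
print(f"{recoup_rate:.0%}")  # 88%
```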
Figure 3 shows the total number of cases found to have SCA violations in fiscal years 2003 and 2004, differentiating those cases that were registered under other WHD acts from those that were initiated as an SCA investigation. WHD’s SCA investigations have a generally high success rate when judged by one key measure of enforcement success—the percentage of back wages and benefits that contractors agreed to pay—compared to the wages and benefits that contractors owed. WHD’s overall rate of back wages recouped has also been high. Figure 4 shows the number of employees with back wages owed them, and the number of employees whom contractors agreed to pay for fiscal years 2003 and 2004. For these two periods, contractors agreed to pay about 89 percent of unpaid wages that they were found to owe for SCA violations. WHD may debar contractors who refuse to pay back wages and fringe benefits owed to service contract employees or who otherwise meet SCA and WHD conditions for debarment. WHD may also arrange with federal agencies to permit debarred contractors to complete the contract under which violations occurred, but debarred contractors may not bid on or be awarded any other federal contracts during the standard 3-year debarment period. WHD debarred 17 contractors in fiscal year 2004, in contrast with the approximately 450 contractors that it investigated. Table 2 shows the number of debarments for fiscal years 2000 through 2004 by region. WHD provides compliance assistance to federal contracting agencies and contractors to help improve SCA compliance. One of WHD’s basic missions is to provide employers and workers with clear and easy-to-access information on how to comply with federal employment laws—information and guidance that are often referred to as compliance assistance. 
Compliance assistance includes brochures and pamphlets, workplace posters, telephone consultations, on-site consultations, training sessions or seminars for individuals or groups, and Web-based information. WHD’s Web site, for example, contains an Employment Law Guide with details about SCA coverage, requirements, employee rights, penalties, and sanctions. In fiscal year 2004, WHD provided SCA compliance assistance at national, regional, and local levels to federal agencies, contractors, and service contract employee groups. National-level training and outreach efforts included presentations, speeches and seminars for the National Industries for the Blind and the U.S. Patent and Trademark Office, and panel discussions with the National Star Route Mail Contractors’ Association. Regional offices provided similar outreach and training to officials from such federal agencies as the Office of Federal Contract Compliance Programs, Small Business Administration, Social Security Administration, and the U.S. Army Corps of Engineers. Local-level training and outreach included presentations to the Directorate of Contracting at Fort Riley, Kansas, and to employers that have SCA low-wage industry contracts under a Small Business Administration program. In fiscal year 2004, WHD provided training to federal agency contracting officials in the Department of Defense through an arrangement with the Contract Services Association of America, an organization that promotes the use of private contractors for all federal government services. One of the most universal forms of day-to-day compliance assistance that WHD provides is its workplace poster. SCA requires contractors to post the poster at work sites unless the contractor has notified individual employees of their wages and benefits. 
WHD regulations issued to implement SCA state that the WHD poster (Publication WH 1313), when applicable, shall be posted in a prominent and accessible place at the worksite, and failure to comply with this requirement is a violation of the act and of the contract. WHD’s SCA workplace poster serves a dual purpose of both assistance and enforcement. As an assistance tool, the poster informs service contract employees of their wages, benefits, and other entitlements (overtime and safety and health conditions) under the contract with the federal government. As an enforcement tool, the poster provides evidence that the contractor is subject to SCA and DOL regulations governing service contracts as they relate to employee notification. WHD has designed the poster to be used for both SCA and the Walsh-Healey Act. WHD’s Web site makes this Service Contract Act/Walsh-Healey Poster readily available to the public. While WHD relies heavily on complaints from employees and others to enforce SCA, WHD’s worksite poster does not provide a telephone number for employees or others to call to register complaints. Instead, the poster directs inquiries for information to the Wage and Hour Division offices located in “principal cities.” The poster also directs potential complainants to check their telephone directory under U.S. Government, Department of Labor, Wage and Hour Division. WHD last revised the poster in 1996. A workplace poster that does not provide service contract employees and others with a clear and easy-to-access method of filing a complaint may hamper their reporting of such complaints. In the absence of a telephone point of contact at WHD, service contract employees may not have the opportunity to report possible or suspected violations of the act and therefore may not receive the full benefit of protection authorized under the act. 
We reported in 2004 that DOL’s Occupational Safety and Health Administration (OSHA) relies heavily on complaints to enforce the Occupational Safety and Health Act. OSHA, in general, responds to complaints according to the seriousness of alleged hazards, a policy that OSHA credits with conserving agency resources. Like WHD, OSHA uses workplace posters as part of its overall compliance assistance and enforcement efforts. OSHA’s workplace posters display a universal national telephone number, telephone numbers for each of OSHA’s 10 regional offices, a national number accessible to the hearing impaired, and instructions on how to file a complaint online through OSHA’s Web site. Determining locally prevailing wages for service employees working in hundreds of occupations throughout the nation is a tremendous undertaking and one that WHD is committed to performing with diligence. WHD is the only organization producing such a vast number of locally prevailing wage rates on a national scale. For their part, WHD analysts have the support of their agency in applying their professional judgment when setting the wage and benefit rates. However, WHD could benefit from greater transparency of its wage determination process. WHD provides limited information on the methodology used to determine SCA wage rates, resulting in analysts receiving numerous inquiries about how they determined wages. Responding to individual requests for explanation diverts analysts from their primary duties of revising and issuing new wage determinations. WHD expressed concerns that providing additional information on its methodology may trigger additional inquiries. However, we believe that additional information could inform some stakeholders, especially those that represent contractors and employees, who could in turn educate their members. As a result, some individuals who otherwise would contact WHD for an explanation on how wages are determined might not see the need to contact WHD. 
A general description of the methods used in the wage determination process could give SCA stakeholders greater confidence in the determined wage rates and possibly improve the quality of service that WHD provides to those who do inquire. WHD strives to update its list of consolidated wage determinations on an annual basis and provides this information online for the convenience of the contracting agencies. However, the job titles and descriptions included in its SCA directory of occupations have not been regularly updated to include emerging service occupations. WHD has been working closely with various stakeholders over the past 3 years to make changes to the directory, although its ad hoc process of updating the directory calls into question the ongoing currency of the occupations listed in the directory used for wage determinations. WHD’s reliance on complaints as the primary means to identify potential SCA violations is a reasonable strategy to pursue, given WHD’s multiple enforcement responsibilities under numerous federal labor laws. However, that strategy currently does not examine the extent to which other information could be used to improve enforcement nationwide. Without further analysis of prior SCA violation data, WHD cannot ensure that it is using the most effective mix of compliance assistance, complaint-driven investigations, and directed investigations. WHD has readily available data on repeat SCA violators, the analysis of which we believe could be performed with minimal investment of additional resources. Furthermore, by taking extra steps to review prior SCA violation data, WHD may find that its existing complaint-driven approach to SCA enforcement is sound. Finally, because the SCA workplace poster does not provide an easy method for employees to report complaints, WHD may be missing opportunities to get the most use from its complaint process. 
Improving the workplace poster would reinforce WHD’s complaint-based strategy and would help further protect the wages and benefits of service contract workers. In an effort to provide stakeholders with a general understanding of how WHD determines wage rates, we recommend that the Secretary of Labor: direct WHD to make publicly available the basic methodology WHD uses to issue wage determinations. To better support WHD and federal contracting agencies in their implementation of SCA, we recommend that the Secretary of Labor: direct WHD to develop a procedure for updating the SCA directory of occupations at regular intervals and include criteria for listing and removing occupations as the need emerges. To further WHD’s efforts to obtain better information concerning the presence of and potential for violations involving SCA contracts, we recommend that the Secretary of Labor: direct WHD to analyze its historical SCA contractor violation data in WHISARD, as well as debarment information not included in WHISARD, and to the extent appropriate, use this information to help plan its compliance assistance and investigative efforts, and to identify additional industries, if any, for which WHD should establish enforcement goals similar to those it currently has for repeat violators and industries with chronic violations. To facilitate the reporting of SCA complaints, we recommend that the Secretary of Labor: direct WHD to update and revise the 1996 Service Contract Act/Walsh-Healey worksite poster, to include national and regional office telephone numbers and a Web site address that complainants may use to report alleged SCA violations. DOL’s ESA provided us with written comments on a draft of this report, which are reproduced in appendix III. The agency agreed with all of the report’s recommendations. ESA noted that WHD will provide a general description of the methods used in the wage determination process on its Web site and through other avenues. 
The agency also commented that WHD will develop a plan for implementing our recommendation concerning its SCA directory of occupations. However, the agency cautioned that any plan to do so must take into account the potential for creating confusion when multiple versions of the directory are applicable to various contracts. ESA acknowledged that this problem already exists but believes it would be exacerbated if the directory were updated more frequently. ESA further noted that WHD’s leadership will include an analysis of its SCA enforcement data in establishing its annual priorities at the national level and in specific local and regional initiatives. Finally, ESA noted that WHD will develop and implement a plan to revise the SCA worksite poster by adding WHD’s toll-free telephone number and the agency’s Web site address. ESA and BLS noted several technical corrections to the report, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Labor and the Assistant Secretary of Labor for Employment Standards. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or robertsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix IV. For this report, we described how the Department of Labor (DOL) (1) establishes locally prevailing wages and fringe benefits and (2) enforces the Service Contract Act (SCA). We also identified potential areas of improvement found in the course of our work. 
To address these objectives, we: reviewed literature on SCA and its corresponding regulations, and analyzed DOL documents and data; interviewed officials in the Wage and Hour Division’s (WHD) headquarters and field offices, the Bureau of Labor Statistics (BLS), and the two federal contracting agencies with the largest proportion of service contract activity—the Department of Defense (DOD) and the General Services Administration. At DOD, we interviewed the agency labor advisors and officials from each of the four branches of service that oversee the military’s SCA activities. In addition, we interviewed representatives from several service industry unions and key trade associations; analyzed data obtained from DOL, including data on WHD investigations from DOL’s Wage Hour Investigator Support and Reporting Database (WHISARD); national, regional, and district office training and outreach efforts; file data on debarments; and data from the Federal Procurement Data System (FPDS), including information on the number and total dollar amount of SCA contract actions for fiscal years 2000 through 2003; and interviewed state officials and representatives from private-sector groups who also produce wage and benefit rates in an effort to better understand the relative merits of DOL’s wage determination process. We obtained current and background data from DOL’s WHISARD database for fiscal years 2003 and 2004. Data included the number of SCA investigations, the number of investigations that led to one or more SCA violations, the number of act violations, amounts of back wages and fringe benefits due from contractors, amounts of unpaid wages and benefits that contractors agreed to pay service contract employees, and the number of employees with unpaid wages and benefits. We also obtained file data from WHD on debarred contractors, including the number of debarred contractors, by year and region. 
We assessed the reliability of the WHISARD data by (1) interviewing agency and contractor officials knowledgeable about the data, and (2) reviewing existing information about the data and the system that produced them, such as the WHISARD User Guide and Procedure Manual; WHISARD data dictionary of tables; and the DOL Inspector General’s fiscal year 2004 Performance and Accountability Review of WHD, which includes WHISARD. We assessed the reliability of the debarment data by interviewing agency officials about the debarment process and the methods used to produce the debarment summary report provided to us. We determined that the required WHISARD data elements and debarment summary data were sufficiently reliable for the purposes of this report. FPDS has been the federal government’s central database of information on federal procurement actions since 1978. It contains detailed information on contract actions over $25,000 and summary data on procurements of less than $25,000. We found in December 2003 that FPDS data were inaccurate and incomplete, and that sufficient problems existed with the system to warrant concern about the reliability of FPDS information. However, in this report, we are using the FPDS data only to provide aggregate information about SCA and to provide context for the report. Although we have determined that the data may be incomplete and certain data elements unreliable, for this report we found them sufficiently reliable for estimating a minimum number of federal contracts and federal SCA dollars expended. A newer system, the FPDS-NG (Next Generation), became operational on October 1, 2003. In December 2003, we stated that the reliability of FPDS data was expected to improve with the implementation of the new system. We recently issued correspondence to the Office of Management and Budget regarding the upgraded system. In addition to the contact named above, Brett S. Fallavollita, Assistant Director, Monika R. Gomez, and Dennis M. 
Gehley made significant contributions to this report in all aspects of the work throughout the review. In addition, Linda L. Siegel helped to develop our overall design and methodology; Margaret L. Armen and Richard P. Burkard provided legal support; Avrum I. Ashery and Jeremy D. Sebest designed our graphics; Shana B. Wallace provided technical assistance; and Jonathan S. McMurray assisted in report and message development.

Department of Labor, Wage and Hour Division, Employment Standards Administration: Service Contract Act; Labor Standards for Federal Service Contracts. OGC-97-14. Washington, D.C.: January 16, 1997.

Navy Contracting: Military Sealift Command’s Contract for Operating Oceanographic Ships. NSIAD-90-151. Washington, D.C.: April 18, 1990.

Department of Labor: Assessment of the Accuracy of Wage Rates Under the Service Contract Act. HRD-87-87BR. Washington, D.C.: May 28, 1987.

Decision of the Comptroller General of the United States, B-218427.2, May 15, 1985, Crowley Towing & Transportation Company.

Congress Should Consider Repeal of the Service Contract Act. HRD-83-4. Washington, D.C.: January 31, 1983.

Assessment of Federal Agency Compliance with the Service Contract Act. HRD-82-59. Washington, D.C.: July 21, 1982.

Service Contract Act Should Not Apply to Service Employees of ADP and High-Technology Companies—A Supplement. HRD-80-102 (A). Washington, D.C.: March 25, 1981.

Service Contract Act Should Not Apply to Service Employees of ADP and High-Technology Companies. HRD-80-102. Washington, D.C.: September 16, 1980.
Recipients of federal government contracts for services are subject to wage, hour, benefits, and safety and health standards under the McNamara-O'Hara Service Contract Act (SCA) of 1965, as amended. SCA requires the Department of Labor (DOL) to set locally prevailing wage rates and other labor standards for employees of contractors furnishing services to the federal government. DOL's Employment Standards Administration's Wage and Hour Division (WHD) administers the SCA and each year determines prevailing wage and fringe benefit rates for over 300 standard service occupations in 205 metropolitan areas. SCA also authorizes DOL to enforce contractor compliance with SCA provisions. This report describes how DOL (1) establishes locally prevailing wages and fringe benefits and (2) enforces SCA. When making a wage determination, WHD analysts consult several sources of information, such as its SCA directory of occupations and data collected through two Bureau of Labor Statistics national wage surveys, for wage data on occupations. Relying on these tools and their own expertise, analysts calculate prevailing wages and fringe benefit amounts for specific geographic locations. The wage determination process produces a wealth of nationwide wage data for service occupations that WHD makes available online and strives to update annually. However, stakeholders (e.g., unions, contractors, employees, and others) contend that the wage determination process is not transparent and that the resulting wages do not necessarily reflect local wage conditions. For example, WHD does not include a description of the methodology used to derive the wage rates in its wage determinations, such as wage data sources used or the procedures analysts follow. As a result, analysts spend considerable time responding to inquiries about the methodology used to determine wages. 
WHD enforces SCA by conducting investigations, ensuring contractor payments, and providing compliance assistance to stakeholders. WHD investigates complaints from service contract employees, federal agencies, unions, and others who allege that contractors have failed to pay either the wages or fringe benefits, or both, specified in SCA contracts. WHD collects violation data, but it does not fully use these data to plan compliance assistance, target specific service industries or geographic locations for SCA investigation, or set strategic enforcement goals. As a result, WHD may be overlooking some SCA violators and industries that need further enforcement. A review of prior SCA violation data could provide WHD assurance that it is using the most effective mix of available compliance assistance and investigative efforts.
In addition to the 50-50 requirement in 10 U.S.C. 2466, two other title 10 provisions directly affect the reporting of workload allocations to the public and private sectors. Section 2460 defines depot maintenance to encompass material maintenance or repair requiring the overhaul, upgrade, or rebuilding of parts, assemblies, or subassemblies and the testing and reclamation of equipment, regardless of the source of funds or the location at which maintenance or repair is performed. Depot maintenance also encompasses software maintenance, interim contractor support, and contractor logistics support to the extent that work performed in these categories is depot maintenance. The statute excludes from depot maintenance the nuclear refueling of an aircraft carrier, the procurement of major modifications or upgrades of weapon systems, and the procurement of parts for safety modifications, although the term does include the installation of parts for safety modifications. Section 2474 directs DOD to designate public depots as Centers of Industrial and Technical Excellence and to improve their operations so as to serve as recognized leaders in their core competencies. Section 342 of the National Defense Authorization Act for Fiscal Year 2002 (P.L. 107-107, Dec. 28, 2001) amended this statute to exclude qualifying public-private partnerships from the 50-percent funding limitation on contracting in section 2466. Section 342 provides that the funds expended for the performance of depot-level maintenance by nonfederal government personnel located at the centers shall not be counted when applying the 50-percent limitation if the personnel are provided pursuant to a public-private partnership. This exclusion initially applied to depot maintenance funding for fiscal years 2002 through 2005. Section 334 of the National Defense Authorization Act for Fiscal Year 2003 (P.L. 107-314, Dec. 2, 2002) extended this period to include all contracts entered into through fiscal year 2006. 
The Office of the Secretary of Defense (OSD) has issued guidance to the military departments for reporting public-private workload allocations. The guidance is consistent with the definition of depot-level maintenance and repair in 10 U.S.C. 2460. The military departments have also issued internal instructions to manage the data collection and reporting process, tailored to their individual organizations and operating environments. Based on the congressional mandate regarding the DOD 50-50 requirement, this is the sixth year that we have reported on the prior-year numbers and the fourth year that we have reported on the future-year numbers. In past years, we have reported on continuing data errors and inconsistencies in reporting by the military departments and problems in documenting and independently validating 50-50 data. We have recommended increasing management attention to and emphasis on the 50-50 reporting process, improving guidance in specific maintenance categories, and implementing better internal controls. We have also observed that the 50-50 process is complex, involving numerous reporting entities and commands, and requiring the incorporation of evolving new concepts of logistics support, changing locations and organizations for accomplishing depot maintenance, and changes in statutory provisions. Service officials told us that the reporting process is somewhat burdensome and the time frames for collecting data are tight. Further complications in reporting result from relatively high turnover in staff responsible for collecting and managing data and uneven management attention and priority accorded the 50-50 process. Our work has historically been augmented by the efforts of the service audit agencies, which have participated in the 50-50 processes in varying degrees. 
We have recommended the continued involvement of the auditors to review and validate reporting processes and results and to correct substantial errors and omissions before the 50-50 data are submitted to the Congress. Our prior reports also recognized the limitations of DOD’s financial systems, operations, and controls. Our audits of DOD’s financial management operations have routinely identified pervasive weaknesses in financial systems, operations, and internal controls that impede its ability to provide useful, reliable, and timely financial information for day-to-day management and decision making. In the financial management systems area, DOD continues to struggle in its efforts to implement systems to support managerial decision-making. As we recently reported, DOD can ill afford to invest in systems that are not capable of providing DOD management with more accurate, timely, and reliable information on the results of the department’s business operations. To date, none of the military services or major DOD components has passed the test of an independent financial audit. A continuing inability to capture and report the full cost of its programs represents one of the most significant impediments facing DOD. Nonetheless, the data used to develop the 50-50 report are the only data available and are accepted and used for DOD decision making and for congressional oversight. Table 1 provides a consolidated summary of DOD’s 2003 prior-years and future-years reports to the Congress on public and private sector workload allocations for depot maintenance. The amounts shown are DOD’s record of actual obligations incurred for depot maintenance work in fiscal years 2001 and 2002 and projected obligations for fiscal years 2003-2007 based on the defense budget and service funding baselines. The percentages show the relative allocations between the public and private sectors and the exempted workloads. 
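The 50-50 percentages summarized in table 1 follow from simple arithmetic on obligations. As a hedged sketch, the Python below uses purely illustrative dollar amounts (none are DOD's reported figures) to show how the public, private, and exempted shares are derived and how the private share is tested against the section 2466 limitation:

```python
# Hedged sketch of the 50-50 percentage arithmetic; all dollar amounts
# are illustrative, not DOD's reported obligations (millions of dollars).
def shares(public, private, exempt):
    """Percentage allocation of depot maintenance obligations.

    `exempt` is qualifying public-private partnership work excluded from
    the 50-percent limitation by section 342 (P.L. 107-107).
    """
    total = public + private + exempt
    pct = lambda amount: round(100.0 * amount / total, 1)
    return pct(public), pct(private), pct(exempt)

pub, priv, ex = shares(public=5_200, private=4_500, exempt=300)
over_limit = priv > 50.0            # the 10 U.S.C. 2466 test on the private share
pre_exemption_private = priv + ex   # the figure absent the partnership exemption
```

The final line mirrors the report's practice of adding the private and exempted shares to show what would have been reported before the exemption existed.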
Adding the private and private-exempted percentages together shows what the private-sector percentage would have been absent the recent legislation exempting qualified partnership workloads. DOD’s prior-years report for fiscal years 2001 and 2002 as submitted to the Congress shows the Departments of the Army and Navy to be below the 50-percent funding limitation on private sector workloads for both years. The Air Force reported itself over the limitation in 2001 and below it in 2002. (See table 1.) The net effect of correcting for the errors and omissions we identified would increase the percentages of workload going to the private sector and move each department closer to the contract limit. Appendix I shows the amounts and effects of our adjustments to the reported data submitted by the military departments for fiscal year 2002 and provides a description of the major deficiencies we found. Overall, however, recurring weaknesses in DOD’s data gathering, reporting processes, and financial systems prevented us from determining with precision whether the services were in compliance with the 50-50 requirement for fiscal years 2001 and 2002. The Army reported its private sector funding to be below the 50-percent limit for both fiscal years 2001 and 2002. Army 50-50 reporting involves multiple commands with somewhat different processes for collecting, summarizing, and validating data. Although the Army used a new, more centralized financial system to collect 50-50 data that corrected some of the transcription errors we found last year, we continued to find errors, omissions, and inconsistencies in its data. For example, as in past years, the Army underreported public and private sector depot-level maintenance work at field locations as it continues unfinished efforts to consolidate maintenance activities and better control the proliferation of depot-level tasks at nondepot locations. 
Other Army work was not reported because some commands did not receive 50-50 instructions and others misapplied the guidance. Unfamiliarity with the guidance was caused in some instances by the large turnover from last year in the staff responsible for collecting and summarizing data. Staff turnover was cited by each of the military services as contributing to increased errors and training needs. To the extent we identified them, these specific errors would add about $228 million in total to the Army’s public and private sector workloads in 2002; the net effect of correcting for these errors would add 2.5 percentage points to the private sector allocation in 2002. (See table 2 in app. I.) The Navy reported its private sector funding to be below the 50-percent limit for both fiscal years. Similar to the Army, the Navy’s 50-50 process also involves multiple naval commands as well as the Marine Corps. As in prior years, we believe this increases the complexity of managing the process and of ensuring consistent application of the guidance. It also exacerbates the less than adequate data validation efforts. We identified several problems that carried over from last year’s 50-50 efforts. The Navy did not report any depot maintenance work accomplished along with the nuclear refueling of its aircraft carriers, citing the exclusion of nuclear refueling from the 10 U.S.C. 2460 definition of depot maintenance. We continue to believe that depot repairs not directly associated with refueling tasks should be reported because these kinds of repair actions are reported by other organizations and funding for these tasks is identifiable in contracts and financial systems. The Navy also continues to report inconsistently on inactivation activities, which involve the servicing and preservation of systems and equipment before they are placed in storage or in an inactive status. 
Officials report public sector workloads for inactivation activities on nuclear ships but do not report such work on nonnuclear ships, saying that the former workload is complex while the latter is not. We think all such depot-level work should be counted since the statute and implementing guidance make no distinction based on complexity. These two examples would add about $401 million to the private sector workloads in fiscal year 2002. We also determined that about $41 million of partnership workloads were incorrectly exempted from reporting because the work was not accomplished at a designated depot or was not performed by contract employees. The Marine Corps data are included as part of the Department of the Navy 50-50 report for compliance purposes, but the Corps exercises a separate process for collecting data. Compared to the other services, the Marine Corps has a small depot program but makes relatively more errors and has substantial shortcomings in its management oversight and control actions. For example, most of the program offices in the command that is responsible for acquiring and upgrading weapon systems did not report at all. Our review found that this understated the private sector total for fiscal year 2002 by about $32 million and the public sector total by almost $7 million. We also identified other errors, including a nearly $19 million overstatement of the public sector when an official incorrectly included obligations from fiscal year 2001 in the total for 2002. On balance, for the Department of the Navy as a whole, we found that the total dollar amount of errors affected the private sector data more than the public sector data. Correcting for the errors we found substantially increases the private sector percentage share in fiscal year 2002 from 42.6 percent to 46.9 percent, a gain of over 4 percentage points. (See table 3 in app. I.) The Air Force reported that it exceeded the 50-percent funding limitation for the private sector in 2001. 
As provided by law at the time, the Secretary of the Air Force issued a waiver. The Air Force reported itself back below the limitation for fiscal year 2002. Most of the errors we found were the same as or similar to those from past reviews. For example, the Air Force continues to make a significant adjustment in its reporting for contract administration and oversight costs. The adjustment increases the reported public sector funding and decreases the private sector funding. The total adjustment was $125 million (in absolute terms) for fiscal year 2002. Consistent with the 50-50 guidance, which states that costs should be associated with the end product (i.e., the repaired item), we think these costs should instead be treated as contracting expenses. Accordingly, we reversed this adjustment in our analysis. The Air Force also continues to count some component repair costs twice, once when the component is repaired and again when it is installed in an equipment item or assembly during a periodic overhaul. Officials said these are both reportable events, while we think this overstates the amount of actual repair work done. Eliminating the double count would affect about $666 million in 2002—a $485 million decrease in the public sector amount and a $181 million decrease in the private sector. As in past years, we also identified many errors in the amounts reported for programs supported by interim and contractor logistics support contracts. We determined that several programs used incorrect factors and assumptions to calculate the depot portion of total contract costs. We found other programs that could not adequately explain or justify their estimating methods—some had been developed years ago by officials no longer in the program and simply applied by new staff without checking their validity or maintaining adequate supporting documentation to explain the results. 
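The net effect of corrections like the two Air Force adjustments described above reduces to signed arithmetic against the reported totals. In the sketch below the baseline amounts are hypothetical; only the adjustment amounts ($125 million reversed, $485 million and $181 million removed) come from the text:

```python
# Hedged illustration of netting corrections against reported 50-50 totals.
# Baseline amounts are hypothetical; the adjustment amounts are the Air
# Force examples discussed in the text (millions of dollars).
def apply_adjustments(public, private, adjustments):
    """Apply (delta_public, delta_private) corrections and return the
    adjusted totals plus the resulting private sector percentage."""
    for delta_public, delta_private in adjustments:
        public += delta_public
        private += delta_private
    private_share = round(100.0 * private / (public + private), 1)
    return public, private, private_share

reported_public, reported_private = 7_000.0, 6_000.0   # hypothetical baseline
baseline_share = round(
    100.0 * reported_private / (reported_public + reported_private), 1)

adjustments = [
    (-125.0, +125.0),   # reverse the contract-administration adjustment
    (-485.0, -181.0),   # eliminate double-counted component repairs
]
public, private, private_share = apply_adjustments(
    reported_public, reported_private, adjustments)
```

With these hypothetical baselines the private share rises from 46.2 to 48.2 percent, illustrating how corrections of this size move a department closer to the limitation.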
Relatively high turnover of staff responsible for collecting and managing 50-50 data tends to increase the number and persistence of errors and omissions. In total, the net effect of the errors we found would increase the private sector allocation in 2002 by about 2.7 percent. (See table 4 in app. I.) Because of the changing nature of budget projections and supporting data deficiencies, the future-years report does not provide reasonable estimates of public and private sector depot maintenance funding allocations for fiscal years 2003 through 2007. Furthermore, the services tend to place less emphasis and priority on collecting and validating future-years data. The reported projections are based, in part, on incorrect data, questionable assumptions and estimating factors, and some inconsistencies with existing budgets and management plans. As with the prior years, the net effect of the problems we found generally increases the percentage of funding for projected private sector work. The uncertainty and instability of budget estimates combined with the errors and omissions we found result in a future-years report that is not very useful to congressional and DOD decision makers. We found many of the same problems identified in the prior-years data were continued in the future-years projections. The Army continued to underreport maintenance work at field locations and made other errors similar to its prior-years presentation. While supporting documentation for the Army’s projected data was inadequate, errors and omissions of the same magnitude as fiscal year 2002 would add more than $200 million annually to the totals projected for the public and private sectors in the Army’s future-years report. 
Similarly, in its respective projections, the Navy continued not to report depot maintenance accomplished along with, but not directly related to, nuclear refueling; the Marine Corps underreported work from the acquisition command; and the Air Force contract estimates again involved some questionable estimating factors and assumptions. Overall, we found this year, as in the past, that the services tend to place less emphasis and priority on collecting and validating the future-years data compared to their efforts on the prior-years data. Besides errors in reporting, other internal and external factors can create large fluctuations in reported data, which in turn can provide a distorted and misleading view to outside observers about efforts to remain compliant with the 50-50 requirement. For example, in the current future-years report, the Air Force’s projected public-sector work financed through the working capital fund is about $3.0 billion higher than the total amount reported for the same 4-year time period in the future-years report submitted in 2002. Although this would appear to indicate a large influx of new work to the public depots, in reality the amount of work, according to budget estimates and management reports, is expected to remain fairly level during this reporting period in terms of production hours and size of workforce. Most of the dollar (and percentage) increase in public-sector work is the result of price hikes in the sales rate charged to its customers. Price hikes were caused primarily by increases in the cost of spare and repair parts used in the repair process. The future-year estimates are not reasonable because they represent budget and planning data that change over time, incorporate the same errors found in prior-year data, and have other problems as well. The budget and planning data used to project the share of depot maintenance work to be performed in the public and private sectors in the future are estimates. 
At best, they provide only rough estimates of future funding allocations, and these estimates change over time. As an illustration, our comparison of the 2003 reported data with that in DOD’s 50-50 reports submitted in 2002 showed that congressional and DOD decision makers were given quite a different view this year of the public-private sector workload mix than that presented just last year. With so many errors and frequent changes, the future-years data may be misleading and not very useful to congressional and DOD decision makers, particularly for estimates further in the future. While we have identified these shortcomings in the past, the problems continue and show no signs of abating. DOD officials agreed that the planning and budget data available for making future projections beyond the budget year are not very useful as predictors of the balance of future workloads between the public and private sectors. They also noted that when the services are within a few percentage points of the 50-50 ceiling, as they are now, conclusions drawn from the unreliable future projections do not provide a sound basis for forecasting compliance. Despite prior improvements, opportunities continue to exist to make the 50-50 data a more complete and accurate representation of the balance of funding for depot maintenance work assigned to the public and private sectors. First, streamlining the 50-50 report would offer an opportunity to focus improvement efforts on the data where improvements are most likely to be realized. Second, continued participation of the service audit agencies should improve the quality of the 50-50 data, particularly if the audit support is timely enough to allow corrections to be made before the 50-50 report goes to the Congress. Finally, there are opportunities to improve the data development process. 
As previously discussed, the future-years data, particularly the estimates for the years beyond the current year and budget year, do not provide a reasonable estimate of the future balance of funding for depot maintenance between the public and private sectors. Further, the data may be so unreliable as to be misleading. Streamlining the data collection to cover a shorter period of time could allow responsible officials to focus more closely on the data that are more accurate. Additionally, if the report date to the Congress were extended, the report could be based on more actual costs and require fewer projections, improving the quality of the reported data. While we continue to believe that the service audit agencies could help the military departments improve 50-50 reporting, their future involvement is uncertain. As we have reported in the past, auditor involvement typically identified and corrected substantial errors in the data before the 50-50 reports went to the Congress. However, this year the Air Force Audit Agency did not participate; while the Army did participate, some of the errors its auditors identified were not corrected in the reports to the Congress; and the Navy audit was not done in time to result in changes to the 50-50 data submitted to the Congress. A more meaningful review would be one carried out while the data are being aggregated, with input to the process in time to influence the reported data. DOD officials told us that the audit services were not expecting to work on future 50-50 efforts. Audit services are reconsidering their roles because of recent changes to government auditing standards regarding auditor independence when performing both audit and nonaudit management assistance services for the same client. Air Force auditors have had a positive role in the 50-50 process in past years. 
Serving in an advisory capacity, they identified errors, and cognizant program officials made corrections before the Air Force input was finalized and forwarded to the Office of the Secretary of Defense. This year, however, Air Force auditors decided not to participate. Officials said they were concerned about a conflict of interest because auditors participating in the management services review could also be involved in audit service reviews of depot maintenance programs, processes, and funds. While Army auditors participated in the process during this year’s cycle and some of their work influenced changes in this year’s reported data, some errors were not corrected because of time constraints imposed by the 50-50 reporting schedule. Army officials said the Army Audit Agency would not likely be involved in next year’s 50-50 process, primarily because of concerns about independence. Navy auditors became involved in the process this year after we recommended their participation in prior reports. However, the Navy Audit Service work was not done in time to influence the Navy’s 50-50 report. According to audit service officials, their decision to audit the data after submission, rather than providing advisory services to cognizant officials developing the Navy’s 50-50 report, was influenced by the aforementioned change in audit standards. Navy program officials said that because a post-process audit did not improve the 50-50 data, the audit service would not be used next year. We recognize that recent changes in government auditing standards have been made to better address and specify independence issues arising when an audit organization undertakes both audit and nonaudit services for the same client. Nonetheless, the new auditing standards do not preclude auditors from verifying the accuracy of data, providing other technical assistance to the 50-50 process, and accomplishing other audits of the depot maintenance process, programs, and activities. 
Improved planning, management involvement, and documentation of roles and responsibilities may be required, but a process can be developed to ensure that independence will not be compromised. This has already been done so that the service audit agencies can perform similar functions—evaluating the validity and consistency of data as they are being developed for subsequent decision making—in support of the base realignment and closure process. Incremental improvements in data development were noticeable in the first several years of 50-50 reporting as guidance was clarified and expanded. However, as we reported last year, the quality of the 50-50 data is not continuing to improve as it did in the earlier years of the reporting requirement. The overall quality and direction of DOD’s reporting seem to have reached a plateau where further major improvements have been limited. As we have previously discussed, one of the reasons this has occurred is that 50-50 guidance was not always distributed to the people who needed it. Further, significant turnover of personnel responsible for developing the data, without sufficient time and training to familiarize them with the 50-50 requirement and process, adversely affected the quality of the 50-50 data. In short, the priority afforded this process by management at all levels in the department is not sufficient to ensure that the data are as accurate as possible. Continuing errors and omissions in the data for both the prior- and future-years reports indicate that each of the service components is closer to exceeding the limitation on the percentage of work permitted to be performed by the private sector than DOD’s reporting would indicate. At best, DOD’s data over time should be treated as providing a rough approximation of the allocation of depot maintenance workloads between the public and private sectors with some indication of trends. 
As such, the information on actual prior-years allocations can be useful to the Congress in its oversight role and to DOD officials in deciding support strategies for new systems and in evaluating depot policies and practices. On the other hand, because it provides an increasingly less reliable estimate of projected allocations the further it gets from the current year, the future-years report is not a very useful tool for informing the Congress or DOD officials about likely future compliance. This occurs because of the changing nature of projections, a combination of errors and omissions, less emphasis by the services on the collection and validation of future-years data, and the use of ever-changing budgetary estimates to construct projections. These budgetary estimates—and built-in assumptions—become more inexact and more problematic the further into the future the projections are made because of their speculative and volatile nature. Indeed, tracking the 50-50 projected data from year to year reveals wide swings in the total amounts reported and in the relative allocations to the public and private sectors. As a result, congressional and DOD decision makers were given quite a different view this year of the public-private sector workload mix than that presented just last year. We believe that these problems are likely to continue, and we question the cost-effectiveness of collecting and aggregating data for 3 years past the current and budget years given the problems identified with the estimates. Furthermore, after the first several years of 50-50 reporting, the overall quality of DOD reporting in terms of accuracy and completeness has not improved significantly. Indeed, the overall quality and direction seem to have reached a plateau where further major improvements to reporting may be unachievable and where the environmental factors that complicate reporting are not expected to change much. 
These complicating factors—including a burdensome collection process, tight time frames for collecting data, high staff turnover, uneven management attention, and changing concepts about maintenance organization and delivery—present continued challenges to the services in their ability to make significant improvements to their collection, documentation, and reporting processes. Notwithstanding these constraints, opportunities still exist to improve the reporting, including continued use of the audit services and renewed efforts to ensure that guidance is appropriately disseminated and that staff are trained in its use. Given that we continue to see the same problems and complicating factors in our current and past assessments of 50-50 reports, and considering that the volatile nature of budget estimates is not likely to change, the Congress should consider amending 10 U.S.C. 2466 to require only one annual 50-50 report. The single report would cover a 3-year period (prior year, current year, and budget year) for which the data are generally more reliable and the potential impacts more immediate. The Congress should also consider extending the due date for the single report from February 1 of each year to April 1; this would provide more time for the military departments to collect and validate data and allow for the incorporation of more actual cost data for the current year estimate. To enhance data verification and validation, we recommend that the Secretary of Defense require the secretaries of the military departments to direct the use of service audit agencies, or an agreed-upon alternate method, for third-party review and validation of 50-50 data and to ensure that auditor-identified errors in the data are rectified before reports are submitted to the Congress. 
To ensure consistent and complete reporting, we recommend that the Secretary of Defense direct the secretaries of the military departments to ensure that 50-50 reporting guidance is appropriately disseminated to reporting organizations and individuals and that staff are properly and timely trained in the application of the guidance. In written comments on a draft of this report from the Deputy Under Secretary of Defense for Logistics and Materiel Readiness, DOD concurred with the report’s recommendations. However, the department did not agree with limited portions of our analyses regarding some selected workloads and the resulting impacts on the percentage allocation of funds between the public and private sectors. These workloads involve the Navy’s nuclear carrier refueling and surface ship inactivation and the Air Force’s adjustment for general and administrative expenses and double counting of some reparable workloads. DOD’s written comments, and our evaluation of these items in question, are reprinted in appendix III. We are sending copies of this report to congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has questions regarding this report, please contact me at (202) 512-8412 or holman@gao.gov or Julia Denman, Assistant Director, at (202) 512-4290 or denmanj@gao.gov. Other major contributors to this report were David Epstein, Bruce Fairbairn, Jane Hunt, Larry Junek, Robert Malpass, Andy Marek, Marjorie Pratt, John Strong, and Bobby Worrell. 
Our review of the data supporting the Department of Defense’s (DOD) prior-years report identified errors, omissions, and inconsistencies that, if corrected, would revise the total workloads and increase the private-sector allocations for each of the military departments. Brief descriptions of the larger and more extensive problems found follow the adjusted figures. Our review of fiscal year 2002 data reported by the Army and of supporting documentation for selected activities identified errors, omissions, and inconsistencies that, if corrected, would result in significant adjustments in the public and private sector percentages reported to the Congress, as shown in table 2. Errors we found included the following examples:

Unreported depot-level work associated with the Army’s ongoing efforts to consolidate maintenance activities and craft a national maintenance program. Our prior 50-50 reports have documented continuing problems and shortcomings in accurately and consistently reporting depot maintenance accomplished by both public and private sector sources at nondepot locations.

Unreported one-time repair actions. These are depot repairs that are accomplished at nondepot locations following an organization’s request and approval to do this work on a limited basis.

Unreported work by commands that did not receive Army reporting guidance, and other misreported and understated work by some commands that received but misapplied the guidance.

Other adjustments included (1) errors identified by the Army Audit Agency but not corrected in the data sent to the Office of the Secretary of Defense (OSD) for inclusion in the prior-years 50-50 report to the Congress and (2) depot support work identified in a contractor’s study of the proliferation of depot work at nondepot locations. 
Our review of fiscal year 2002 data reported by the Navy and Marine Corps and of supporting documentation for selected activities identified errors, omissions, and inconsistencies that, if corrected, would result in significant adjustments in the public and private sector percentages reported to the Congress, as shown in table 3. Errors we found included the following examples:

Unreported depot work on nuclear aircraft carriers. As reported last year, Navy officials cite the definition in 10 U.S.C. 2460, which excludes from depot maintenance the nuclear refueling of aircraft carriers, in justifying why they do not report any of the depot work accomplished at the same time as refueling. We believe that depot work that is reportable elsewhere and separate from the refueling tasks should be reported.

Inconsistent reporting of ship inactivations, which include depot tasks for servicing and preserving equipment before they are placed in storage or in an inactive status. Navy officials report for 50-50 purposes the nuclear ship inactivation work performed in the public sector but do not report surface ship inactivation work performed by the private sector.

Underreporting of maintenance work by the command responsible for acquiring and upgrading Marine Corps weapon systems. Failure to report has several causes, including misunderstanding of what should be reported, limited dissemination of the 50-50 guidance, and inadequate management and oversight of the collection process to identify and resolve reporting deficiencies.

Incorrectly exempting some private-sector activities from reporting. The Navy exempted more work than did the other departments, but we found some exemptions in error, including partnering work accomplished at a contractor facility and some work actually performed by government employees. Partnership work qualifying for the exemption must be accomplished at designated public depots by contractor employees. 
Other errors included (1) work subcontracted by the public shipyards to the private sector reported as public sector work and (2) misreporting by the Marine Corps of work obligated in fiscal year 2001 rather than 2002. Our review of fiscal year 2002 data reported by the Air Force and of supporting documentation for selected activities identified errors, omissions, and inconsistencies that, if corrected, would result in significant adjustments in the public and private sector percentages reported to the Congress, as shown in table 4. Errors we found included the following examples:

As in past years, Air Force officials continue to adjust the 50-50 data for the salaries and overhead expenses of government employees administering depot maintenance contracts funded through the working capital fund. Officials subtract these amounts from the reported private sector amount—where they are accounted for within the working capital fund—and add them to the public sector funding for 50-50 reporting. Consistent with the 50-50 guidance that states that costs should be associated with the end product, we think these costs should be treated as contracting expenses.

Our review of Air Force workloads determined that funding for some component repairs was counted twice in 50-50 data, once when the item was repaired and the second time when it was installed into a weapon system or major subsystem during its overhaul. This resulted in overstating both public sector work and, by a lesser amount, private sector work.

Errors occurred in reporting depot costs on interim contractor support and contractor logistics support contracts. Our review of selected programs identified numerous errors resulting in net underreporting of depot maintenance work performed by contractors. Many problems resulted from questionable factors and assumptions used in developing estimating methodologies. 
Because interim contractor support and contractor logistics support contracts often cover more than just depot maintenance (including lower levels of maintenance, supply operations, and logistics program management), the OSD guidance allows for the use of estimating methods. This can cause complications and introduce subjectivity into the data collection process. Newer contract approaches under acquisition reform efforts pose particularly challenging problems in identifying the depot portion. Examples of errors and questionable practices we found included not updating a methodology when contract provisions and circumstances change, resulting in not reporting additional maintenance work from increased operational contingencies and new orders of materials; assuming a straight percentage of total cost as depot work where data exists to make a more exact accounting; not reporting maintenance on a newly acquired modification; and not reporting software depot maintenance. To determine whether the military departments met the 50-50 requirement in the prior-years report, we analyzed each service’s procedures and internal management controls for collecting and reporting depot maintenance information for purposes of responding to the section 2466 requirement. We reviewed supporting details (summary records, accounting reports, budget submissions, and contract documents) at departmental headquarters, major commands, and selected maintenance activities. We compared processes to determine consistency and compliance with legislative provisions, OSD guidance, and military service instructions. We selected certain programs and maintenance activities for a more detailed review. We particularly examined reporting categories that DOD personnel and we had identified as problem areas in current and past reviews. These areas included interserviced workloads, contractor logistics support, warranties, software maintenance, and depot maintenance at nondepot locations. 
We evaluated processes for collecting and aggregating data to ensure accurate and complete reporting and to identify errors, omissions, and inconsistencies. We coordinated our work, shared information, and obtained results of the Army and Air Force service audit agencies’ data validation efforts. To determine whether the future-year projections were based on accurate data, valid assumptions, and existing plans and represented reasonable estimates, we followed the same general approach and methodology used to review the prior-years report. Although the future-years report is a budget-based projection of obligations, the definitions, guidance, organization, and processes used to report future data are much the same as for the prior-years report of actual obligations. We discussed with DOD officials the main differences between the two processes and the manner in which the data were derived from budgets and planning requirements and key assumptions made in the outyear data. For reviews of both 50-50 reports, we performed certain checks and tests, including variance analyses, to judge the consistency of this information with data from prior years and with the future-years budgeting and programming data used in DOD’s budget submissions and reports to the Congress. For example, we compared each service’s 50-50 data reported in February and April 2003 for the period 2001 through 2006 with data reported for these same years in the 50-50 reports submitted in 2002. We found repeated and significant changes, even though the estimates were prepared only about 1 year apart. We used this analysis to further discuss with officials and analyze reasons for changes in reported data and percentage allocations between the 2002 and 2003 reports submitted to the Congress. Variance analysis showed that congressional and DOD decision makers were given quite a different view of the public-private sector workload mix than that presented just last year. 
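The variance analysis described above can be sketched in a few lines. The percentages, fiscal years, and the 3-percentage-point flagging threshold below are illustrative assumptions, not actual reported data or a GAO materiality standard; the sketch only shows the mechanics of comparing the private-sector percentages a service reported for the same fiscal years in two successive 50-50 submissions and flagging large swings for follow-up.

```python
# Illustrative sketch of a year-over-year variance analysis of 50-50 data.
# All figures are hypothetical assumptions, not actual reported data.

FLAG_THRESHOLD = 3.0  # percentage points; an assumed follow-up cutoff

# Hypothetical private-sector percentages by fiscal year, as reported
# in two successive annual submissions covering the same years.
report_2002 = {2003: 48.0, 2004: 46.5, 2005: 44.0, 2006: 43.0}
report_2003 = {2003: 44.5, 2004: 47.0, 2005: 49.5, 2006: 45.0}

def variances(old: dict, new: dict) -> dict:
    """Percentage-point change for each fiscal year present in both reports."""
    return {fy: round(new[fy] - old[fy], 1) for fy in old if fy in new}

deltas = variances(report_2002, report_2003)
flagged = {fy: d for fy, d in deltas.items() if abs(d) >= FLAG_THRESHOLD}
print(deltas)
print("Years warranting follow-up:", sorted(flagged))
```

Large deltas between estimates prepared only a year apart, as the sketch would surface for two of the four years, are the kind of repeated and significant changes the report discusses with officials.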
Several factors concerning data validity and completeness were considered in our methodology and approach to reviewing the prior- and future-years reports. One key factor is the continuing deficiencies we have noted in DOD’s financial systems and reports that preclude a clean opinion on its financial statements and that result in limited accuracy of budget and cost information. Another factor is that documenting depot maintenance workload allocations between the public and private sectors is being complicated by the consolidation of maintenance activities and the performance of depot-level maintenance at field locations. These complicating factors (1) make it more difficult to identify work that meets the statutory definition of depot maintenance, (2) complicate workload reporting, and (3) result in underreporting of depot maintenance for both the public and private sectors. In addition, changes in business philosophy and approach can make analysis more difficult. For example, many new contracts are performance-based and may not discretely identify maintenance activities or account separately for their costs. This can result in under- and overreporting of depot maintenance work performed in the private sector. It also forces more reliance on the contractor for providing information needed in 50-50 reporting and may result in DOD officials having to use more assumptions and estimating methodologies in lieu of contract data. As part of our efforts to identify areas for improvement, we reviewed DOD’s efforts to improve the accuracy and completeness of reports. We discussed with officials managing and coordinating the reporting process their efforts to address known problem areas and respond to recommendations by the audit agencies and us. We compared this year’s sets of instructions with last year’s to identify changes and additions. We reviewed efforts to identify reporting sources and to distribute guidance and taskings. 
We asked primary data collectors to provide their opinions on how well efforts were managed and data verified and to identify “pain points” and ideas they had to improve reporting. We reviewed prior recommendations and service audit agency findings to determine whether known problem areas were being addressed and resolved. We applied this knowledge to identify additional areas for improving the reporting process and management controls. We interviewed officials, examined documents, and obtained data at OSD, Army, Navy, Marine Corps, and Air Force headquarters in the Washington, D.C., area; Army Materiel Command in Alexandria, Virginia; Naval Sea Systems Command in Washington, D.C.; Naval Air Systems Command in Patuxent River, Maryland; Marine Corps Materiel Command in Albany, Georgia; Air Force Materiel Command in Dayton, Ohio; Army Audit Agency in Washington, D.C.; Naval Audit Service in Crystal City, Virginia; several public depots managed by the military departments’ materiel commands; and selected operating bases. We conducted our review from February to July 2003 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Defense’s letter dated August 26, 2003. 1. The department did not agree with our adjustment for nuclear aircraft carriers. The Navy interprets the 10 U.S.C. 2460 exclusion of nuclear refueling of aircraft carriers from the definition of depot maintenance to mean that no work associated with the refueling complex overhaul of nuclear carriers is reportable for 50-50 purposes. Navy officials also said that non-nuclear depot repairs on carriers are not severable tasks to be split out from contracts. We continue to believe that the costs of depot repairs and tasks not directly associated with nuclear refueling tasks during carrier overhauls should be reported. 
Many maintenance tasks performed at the same time as the nuclear refueling are not related to the refueling; and when these and similar tasks are performed during other maintenance activities, the Navy does report them as depot maintenance. We found that the funding for these tasks is clearly identifiable in the contract financial records and could be counted just like other 50-50 work. In our view, without some nexus between that work and refueling work, it would be inconsistent with the plain language of section 2460 to exempt that work simply because it was performed during a refueling complex overhaul of nuclear carriers. We deleted the reference to severable tasks in the body of the report, as our intent was not to suggest that the Navy break out non-nuclear work from nuclear work onto separate contracts or work orders, but rather that the funding for non-nuclear refueling work accomplished on existing contracts be identified and reported. 2. The department did not agree with our adjustment for surface ship inactivations. DOD considers nuclear ship inactivation work to be a relatively complex process that is equivalent to depot-level maintenance, but that conventional ship inactivation work performed by the private sector is not as complex and is not equivalent to depot-level maintenance. In addition, the department’s written response indicated that surface ship inactivation work accomplished by the public sector is also not reported in the 50-50 data. We believe that inactivation work should be reported because the relevant title 10 statutes and OSD’s 50-50 guidance do not make this distinction of relative complexity and require reporting of all depot maintenance, regardless of location and source of funding. Further, DOD’s Financial Management Regulation 7000.14-R, vol. 6A, ch. 14 (which prescribes depot maintenance reporting requirements) includes inactivation as a depot maintenance activity. 
Although we did not review inactivation work accomplished by public sector workers, it should also be reported if it meets the definition of depot maintenance. 3. The Air Force did not agree with our reversal of the 50-50 reporting adjustment it makes for the salaries and overhead expenses of government employees administering depot maintenance contracts. The Air Force believes that the costs for government personnel managing depot maintenance contracts represent public sector costs; therefore, to report them as contract costs would misrepresent the public-private sector percentage allocations. However, OSD’s 50-50 guidance requires that all the costs associated with accomplishing a specific depot workload—labor, material, parts, indirect, and overhead—be counted for 50-50 purposes in the sector accomplishing the actual maintenance. The guidance cites examples, such as counting the contract maintenance on depot plant equipment as public sector costs because the plant equipment is part of the costs incurred to perform maintenance at the depot. Similarly, we think that contract administrative costs should be counted as part of the costs incurred to accomplish the work in the private sector. We note that the Air Force will stop making this adjustment after this year when financing for the depot contracts is moved from the working capital fund to direct appropriations. It remains to be seen, however, how the Air Force will account for contract administrative expenses in the future. 4. The department did not agree that counting the repair costs twice for some components installed in higher-level assemblies is inconsistent with the statutory requirements of 10 U.S.C. 2466(e) and 10 U.S.C. 2460. The Air Force believes that the original repair cost for a component and its subsequent cost as material used in system or subsystem overhaul are two distinct and separate transactions and that both costs should be reported for 50-50 purposes. 
We continue to believe that counting some component repair costs twice when the components are incorporated in a higher-level assembly distorts the 50-50 reports and the actual amount of work accomplished by both the public and private sectors. In our view, there is no reason to conclude that the intent of title 10 requires double counting component repairs; a more reasonable reading is that DOD can implement those provisions so as to allow for adjustments in reporting to more accurately reflect the cost of depot work. DOD adopted a similar approach in response to a recommendation in our 2001 report. In that report, we found that unrealistic and outdated budget data were being reported when there were other, more accurate information sources. Accordingly, OSD revised its 50-50 guidance to allow for revising budgetary estimates to better reflect known and anticipated changes in workloads, workforce, priorities, and performance execution rates. This resulted in the Air Force reporting additional hundreds of millions of dollars in projected depot work based on current workload estimates. A similar approach could be used to eliminate the effects of double counting reparables later used in higher-level assemblies.

Depot Maintenance: Key Unresolved Issues Affect the Army Depot System’s Viability. GAO-03-682. Washington, D.C.: July 7, 2003.
Department of Defense: Status of Financial Management Weaknesses and Progress Toward Reform. GAO-03-931T. Washington, D.C.: June 25, 2003.
Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight. GAO-03-16. Washington, D.C.: October 18, 2002.
Depot Maintenance: Management Attention Needed to Further Improve Workload Allocation Data. GAO-02-95. Washington, D.C.: November 9, 2001.
Defense Logistics: Actions Needed to Overcome Capability Gaps in the Public Depot System. GAO-02-105. Washington, D.C.: October 12, 2001. 
Defense Maintenance: Sustaining Readiness Support Capabilities Requires a Comprehensive Plan. GAO-01-533T. Washington, D.C.: March 23, 2001.
Depot Maintenance: Key Financial Issues for Consolidations at Pearl Harbor and Elsewhere Are Still Unresolved. GAO-01-19. Washington, D.C.: January 22, 2001.
Depot Maintenance: Action Needed to Avoid Exceeding Ceiling on Contract Workloads. GAO/NSIAD-00-193. Washington, D.C.: August 24, 2000.
Depot Maintenance: Air Force Waiver to 10 U.S.C. 2466. GAO/NSIAD-00-152R. Washington, D.C.: May 22, 2000.
Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Ceiling. GAO/T-NSIAD-00-112. Washington, D.C.: March 3, 2000.
Depot Maintenance: Future Year Estimates of Public and Private Workloads Are Likely to Change. GAO/NSIAD-00-69. Washington, D.C.: March 1, 2000.
Depot Maintenance: Army Report Provides Incomplete Assessment of Depot-type Capabilities. GAO/NSIAD-00-20. Washington, D.C.: October 15, 1999.
Depot Maintenance: Status of the Navy’s Pearl Harbor Project. GAO/NSIAD-99-199. Washington, D.C.: September 10, 1999.
Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain. GAO/NSIAD-99-154. Washington, D.C.: July 13, 1999.
Navy Ship Maintenance: Allocation of Ship Maintenance Work in the Norfolk, Virginia, Area. GAO/NSIAD-99-54. Washington, D.C.: February 24, 1999.
Defense Depot Maintenance: Public and Private Sector Workload Distribution Reporting Can Be Further Improved. GAO/NSIAD-98-175. Washington, D.C.: July 23, 1998.
Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector. GAO/NSIAD-98-8. Washington, D.C.: March 31, 1998.
Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations. GAO/NSIAD-98-41. Washington, D.C.: January 20, 1998.
Defense Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program. GAO/T-NSIAD-97-112. Washington, D.C.: May 1, 1997. Also GAO/T-NSIAD-97-111. Washington, D.C.: March 18, 1997.
Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain. GAO/NSIAD-96-165. Washington, D.C.: May 21, 1996.
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers. GAO/NSIAD-96-166. Washington, D.C.: May 21, 1996.
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix. GAO/T-NSIAD-96-148. Washington, D.C.: April 17, 1996. Also GAO/T-NSIAD-96-146. Washington, D.C.: April 16, 1996.
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors. GAO/T-NSIAD-94-161. Washington, D.C.: April 12, 1994.
Under 10 U.S.C. 2466, not more than 50 percent of each military department's annual depot maintenance funding can be used for work done by private-sector contractors. The Department of Defense (DOD) also must submit two reports to the Congress annually on the division of depot maintenance funding between the public and private sectors--one about the percentage of funds spent in the previous 2 fiscal years (prior-years report) and one about the current and 4 succeeding fiscal years (future-years report). As required, GAO reviewed the two DOD reports submitted in early 2003 and is, with this report, submitting its views to the Congress on whether (1) the military services met the so-called "50-50 requirement" for fiscal years 2001-2 and (2) the projections for fiscal years 2003-7 are reasonable estimates. GAO also identified opportunities to improve the reporting process. Continuing weaknesses in DOD's data gathering, reporting processes, and financial systems prevented GAO from determining with precision if the military services complied with the 50-50 requirement in fiscal years 2001-2. DOD data show all the services, except the Air Force in fiscal year 2001, to be below the 50-percent funding limit on private sector work. However, as before, GAO found errors in the data that, if corrected, would overall increase funding of the private sector and move each service closer to the contract limit. For example, for fiscal year 2002, the Navy did not include about $401 million in private sector maintenance work on aircraft carriers and surface ships. Correcting for these and other errors would increase the Navy's percentage of private sector depot maintenance funds for that year from the 42.6 percent reported to 46.9 percent. Such data weaknesses show that prior-years reports do not precisely measure the division of maintenance funding. At best, over time these results provide rough approximations and indicate trends that may be useful to decision makers. 
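The arithmetic behind such a correction is a simple reclassification: previously unreported private-sector obligations are added to the private-sector total, which raises the private-sector percentage of the combined total. In the sketch below, the starting dollar totals and the "other errors" amount are hypothetical assumptions (chosen so the reported share starts at 42.6 percent); only the $401 million figure comes from the report, and the sketch does not reproduce GAO's actual adjustment.

```python
# Illustrative sketch (not GAO's actual computation): how adding
# unreported private-sector work shifts the 50-50 percentage.
# All dollar figures except the $401 million are hypothetical.

def private_share(private_obligations: float, public_obligations: float) -> float:
    """Percentage of total depot maintenance obligations in the private sector."""
    total = private_obligations + public_obligations
    return 100.0 * private_obligations / total

# Hypothetical Navy totals ($ millions) consistent with a 42.6 percent share.
reported_private = 4_260.0
reported_public = 5_740.0
assert round(private_share(reported_private, reported_public), 1) == 42.6

# Reclassify previously unreported private-sector work: the $401 million
# of carrier and surface-ship work, plus a placeholder for other errors.
unreported_private = 401.0 + 350.0  # second figure is an assumed placeholder

corrected = private_share(reported_private + unreported_private, reported_public)
print(f"Corrected private-sector share: {corrected:.1f}%")
```

Because the denominator grows along with the private-sector numerator, each reclassified dollar moves the percentage by less than a naive ratio would suggest, which is why several hundred million dollars of corrections shift the share only a few percentage points.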
Because of data deficiencies and changing budget projections, the future-years report does not provide reasonable estimates of public and private sector maintenance funding for fiscal years 2003-7, which limits its usefulness to decision makers. GAO reported this shortcoming in the past, and problems continue. For example, the Army underreported maintenance work at nondepot locations as it continues to consolidate the work and better control it at such locations. Other Army work was not reported because some commands did not receive guidance and others misapplied it. These errors would add about $200 million annually to the Army's future estimate and increase the percent of projected funding in the private sector. Opportunities still exist for improvements, including streamlining the 50-50 reports, continued service audit agency support, and data development. First, streamlining the 50-50 reports could help address problems caused by, among other factors, inexact program estimates. Second, although DOD is concerned that recent revisions to federal audit standards could keep service auditors from further participation in the 50-50 process, GAO believes that a way can be developed to enable auditors' continued support yet ensure their independence. Third, data development could be helped by better disseminating guidance and training participating personnel.
JPDO has continued to make progress in facilitating the collaboration that is central to its mission and in furthering its key planning documents. However, JPDO faces a number of challenges involving its organizational structure, institutionalization of its efforts, research and development activities, and stakeholder participation. Vision 100 includes requirements for JPDO to coordinate and consult with its partner agencies, private sector experts, and the public. JPDO’s approach has been to establish an organizational structure that involves federal and nonfederal stakeholders throughout the organization. This structure includes a federal interagency senior policy committee, a board of directors, and an institute to facilitate the participation of nonfederal stakeholders. JPDO’s structure also includes eight integrated product teams (IPT), in which federal and nonfederal experts come together to plan for and coordinate the development of technologies for NextGen. The eight IPTs are linked to eight key strategies that JPDO developed early on for guiding its NextGen planning work (see table 1). JPDO’s senior policy committee is headed by the Secretary of Transportation (as required in Vision 100) and includes senior-level officials from JPDO’s partner agencies. The Next Generation Air Transportation System Institute (the Institute) was created by an agreement between the National Center for Advanced Technologies and FAA to incorporate the expertise and views of stakeholders from private industry, state and local governments, and academia. The Institute Management Council (IMC), composed of top officials and representatives from the aviation community, oversees the policy, recommendations, and products of the Institute and provides a means for advancing consensus positions on critical NextGen issues. 
The IPTs are headed by representatives of JPDO’s partner agencies and include more than 200 nonfederal stakeholders from over 100 organizations, whose participation was arranged through the Institute. Figure 1 illustrates JPDO’s position within FAA and the JPDO structures that bring together federal and nonfederal stakeholders, including the Institute and the IPTs. To meet Vision 100’s requirement that JPDO coordinate and consult with the public, the Institute held its first public meeting in March 2006 and plans to hold another public meeting in May 2007. In November 2006, we reported that JPDO’s organizational structure incorporated some of the practices that we have found to be effective for federal interagency collaborations—an important point given how critical such collaboration is to the success of JPDO’s mission. For example, the JPDO partner agencies have worked together to develop key strategies for NextGen and JPDO has leveraged its partner agency resources by staffing various levels of its organization with partner agency employees. Also, our work has shown that involving stakeholders can, among other things, increase their support for a collaborative effort, and the Institute provides a method for involving nonfederal stakeholders in planning NextGen. Recently, JPDO officials told us they have proposed to FAA management and the IMC executive board a change in the IPT structure and operation to improve the efficiency of the organization. JPDO has proposed converting each IPT into a “work group” with the same participants as the current IPT, but with each work group led by a joint government and industry steering committee. The steering committee would oversee the creation of small, ad hoc subgroups that would be tasked with short-term projects exploring specific issues and delivering discrete work products. 
Under this arrangement, work group members would be free of obligations to the group when not engaged in a specific project. According to JPDO officials, if these changes are approved, the work groups would be more efficient and output- or product-focused than the current IPTs. JPDO officials also noted that they are proposing to create a ninth work group to address avionics issues. We believe that these changes could help address concerns that we have heard from some stakeholders about the productivity of some IPTs and the pace of the planning effort at JPDO. Nonetheless, the effectiveness of these changes will have to be evaluated over time. Also, JPDO’s director has pointed out the need for the office to begin transitioning from planning NextGen to facilitating the implementation of NextGen. We believe that these changes are potentially useful in supporting such a transition. However, it will be important to monitor these changes to ensure that the participation of stakeholders is neither decreased nor adversely affected. Maintaining communications within and among work groups could increase in importance if, as work group members focus on specific projects, they become less involved in the overall collaborative planning effort. Finally, while the organizational structures of JPDO and the Institute have been in place and largely unchanged for several years now, both of these entities have suffered from a lack of stable leadership. As JPDO begins its fourth year in operation, it is on its third director, having operated during most of 2006 under the stewardship of an acting director. The Institute pointed out in its recent annual report that JPDO’s leadership turnover had made it a challenge for JPDO to move out more aggressively on many goals and objectives, as the office waited for a full-time director. 
The Institute also stated that JPDO’s leadership turnover had limited the ability of the IMC executive committee to forge a stronger relationship with JPDO leadership and work jointly on strategic issues and challenges. However, the Institute has also had issues with turnover and is currently functioning under an acting director due to the recent departure of its second director, who had been in the position less than two years. The leadership turnovers at both JPDO and the Institute raise concerns about the stability of JPDO and about the impact of these turnovers on the progress of the NextGen initiative. JPDO’s authorizing legislation requires the office to create a multi-agency research and development plan for the transition to NextGen. To comply, JPDO is developing several key documents that together form the foundation of NextGen planning. These documents include a NextGen Concept of Operations, a NextGen Enterprise Architecture, and an Integrated Work Plan. The Concept of Operations is the most fundamental of JPDO’s key planning documents, as the other key documents flow from it. Although an earlier version was delayed so that stakeholder comments could be addressed, Version 1.2 of the Concept of Operations is currently posted on JPDO’s website for review and comment by the aviation community. This 226-page document provides written descriptions of how the NextGen system is envisioned to operate in 2025 and beyond, including highlighting key research and policy issues that will need to be addressed. For example, some key policy issues are associated with automating the air traffic control system, including the need for a backup plan in case automation fails, the responsibilities and liabilities of different stakeholders during an automation failure, and the level of monitoring needed by pilots when automation is ensuring safe separation between aircraft. 
Over the next few months, JPDO plans to address the public comments it receives and issue a revised version of the Concept of Operations. In addition to the Concept of Operations, JPDO is working on an Enterprise Architecture for NextGen—that is, a technical description of the NextGen system, akin to blueprints for a building. The Enterprise Architecture is meant to provide a common tool for planning and understanding the complex, interrelated systems that will make up NextGen. According to JPDO officials, the Enterprise Architecture will provide the means for coordinating among the partner agencies and private sector manufacturers, aligning relevant research and development activities, and integrating equipment. JPDO plans to issue an early version of its Enterprise Architecture next month, although it was originally scheduled for release in September 2006. Finally, JPDO is developing an Integrated Work Plan that will describe the capabilities needed to transition to NextGen from the current system and provide the research, policy and regulation, and schedules necessary to achieve NextGen by 2025. The Integrated Work Plan is akin to a project plan and will be critical for fiscal year 2009 partner agency budget and program planning. According to a JPDO official, the office intends to issue its initial draft of the Integrated Work Plan in July 2007. We have discussed JPDO’s planning documents with JPDO officials and examined both an earlier version of JPDO’s Concept of Operations and the current version that is out for public comment. Based on our analysis, JPDO is focusing on the right types of key documents for the foundation of NextGen planning. As for the Concept of Operations, the current version is much improved from the prior version, with additional details added. 
Nonetheless, we believe that it still does not include key elements such as scenarios illustrating NextGen operations, a summary of NextGen’s operational impact on users and other stakeholders, and an analysis of the benefits, alternatives, and trade-offs that were considered for NextGen. In addition, it lacks an overall description that ties together the eight key areas that the document covers. As noted, JPDO does plan to release another version of the Concept of Operations later this year. In fact, JPDO plans further versions of all of its key planning documents. We see the development of all three of JPDO’s key documents as part of an iterative and evolutionary process. Thus, it is unlikely that any of these documents will ever be truly “finalized”; rather, they will continue to evolve throughout the implementation of NextGen to reflect, for example, the development of new technologies or problems uncovered during research and development of planned technologies. Finally, while each of the three key documents has a specific purpose, the scope and technical sophistication of these documents make it difficult for some stakeholders to understand the basics of the NextGen planning effort. To address this issue, JPDO is currently drafting what the office refers to as a “blueprint” for NextGen, meant to be a short, high-level, nontechnical presentation of NextGen goals and capabilities. We believe that such a document could help some stakeholders develop a better understanding of NextGen and the planning effort to date. In our November 2006 report, we noted that JPDO is fundamentally a planning and coordinating body that lacks authority over the key human and technological resources of its partner agencies. Consequently, institutionalizing the collaborative process with its partner agencies will be critical to JPDO’s ability to facilitate the implementation of NextGen. 
As we reported in November, JPDO has not established some practices significant to institutionalizing its collaborative process. For example, one method for establishing collaboration at a fundamental level would be for JPDO to have formal, long-term agreements among its partner agencies on their roles and responsibilities in creating NextGen. Currently, no mechanism ensures that the partner agencies’ commitment will continue over the 20-year timeframe of NextGen or that they will remain accountable to JPDO. According to JPDO officials, they are working to establish a memorandum of understanding (MOU), signed by the Secretary or other high-ranking official from each partner agency, which will broadly define the partner agencies’ roles and responsibilities. JPDO first informed us of the development of this MOU in August 2005; in November 2006 we recommended that JPDO finalize the MOU and present it to the senior policy committee for its consideration and action. However, as of March 28, 2007, the MOU remained unsigned by some of the partner agencies. Another key method for institutionalizing the collaborative effort is incorporating NextGen goals and activities into the partner agencies’ key planning documents. For example, we noted in November 2006 that NASA and FAA had incorporated NextGen goals into their strategic plans. These types of efforts will be critical to JPDO’s ability to leverage its partner agency resources for continued JPDO planning efforts. Even more importantly, these efforts will be critical to helping ensure that partner agencies—given competing missions and resource demands—dedicate the resources necessary to support the implementation of NextGen research efforts or system acquisitions. Recognizing that JPDO does not have authority over partner agency resources, FAA and JPDO have initiated several efforts to institutionalize NextGen. 
For example, JPDO is working with FAA to refocus one of FAA's key planning documents on the implementation of NextGen—an effort that also appears to be improving the collaboration and coordination between JPDO and FAA's Air Traffic Organization (ATO), which has primary responsibility for modernization of the air traffic control system. FAA has expanded and revamped its Operational Evolution Plan (OEP)—renamed the Operational Evolution Partnership—to become FAA's implementation plan for NextGen. The OEP is being expanded to apply to all of FAA and is intended to become a comprehensive description of how the agency will implement NextGen, including the required technologies, procedures, and resources. (Figure 3 shows the OEP framework.) An ATO official told us that the new OEP is to be consistent with JPDO's key planning documents and its budget guidance to the partner agencies. According to FAA, the new OEP will allow it to demonstrate appropriate budget control and linkage to NextGen plans and will force FAA's research and development to be relevant to NextGen's requirements. According to FAA documents, the agency plans to publish a new OEP in June 2007. In addition, to further align FAA's efforts with JPDO's plans for NextGen, FAA is creating a NextGen Review Board to oversee the OEP. This Review Board will be co-chaired by JPDO's Director and ATO's Vice President of Operations Planning Services. Initiatives, such as concept demonstrations or research, proposed for inclusion in the OEP will now need to go through the Review Board for approval. Initiatives are to be assessed for their relation to NextGen requirements, concept maturity, and risk. An ATO official told us that the new OEP process should also help identify some smaller programs that might be inconsistent with NextGen and could be discontinued. 
Additionally, as a further step toward integrating ATO and JPDO, the administration's reauthorization proposal calls for the JPDO director to be a voting member of FAA's Joint Resources Council and ATO's Executive Council. While progress is being made in incorporating NextGen initiatives into FAA's strategic and planning documents, more remains to be done with FAA and the other JPDO partner agencies. For example, one critical activity that remains in this area will be synchronizing the NextGen enterprise architecture, once JPDO releases and further refines it, with the partner agencies' enterprise architectures. Doing so should help align agencies' current work with NextGen while simultaneously identifying gaps between agency plans and NextGen plans. Also, while FAA is making significant progress toward creating an implementation plan for NextGen, the other partner agencies are not as far along or have not begun such efforts. JPDO's lack of authority over partner agency resources will pose less of a challenge if the partner agencies commit to NextGen goals and initiatives at a structural level. By further incorporating NextGen efforts into their strategic planning documents, the partner agencies will better institutionalize their commitments to JPDO and the NextGen initiative. Finally, another important method for institutionalizing the collaborative effort will be for JPDO to establish mechanisms for leveraging partner agency resources. JPDO has made progress in this area, although further work remains. As we noted in our November report, JPDO is working with OMB to develop a process that would allow OMB to identify NextGen-related projects across the partner agencies and consider NextGen as a unified, cross-agency program. We recently met with OMB officials who said that they felt there had been significant progress with JPDO over the last year. JPDO is now working on an OMB Exhibit 300 form for NextGen. 
This will allow JPDO to present OMB a joint business case for the NextGen-related efforts within the partner agencies and will be used as input to funding decisions for NextGen research and acquisitions across the agencies. This Exhibit 300 will be due to OMB in September 2007 to inform decisions about the partner agencies' 2009 budget submissions. Ultimately, the success of JPDO will have to be measured by the efforts of its partner agencies to implement policies and procedures and acquire systems that support NextGen. To date, JPDO can point to its success in collaborating with FAA to fund and speed the rollout of two systems considered cornerstone technologies for NextGen: Automatic Dependent Surveillance-Broadcast (ADS-B) and System Wide Information Management (SWIM). ADS-B is a new air traffic surveillance system that will replace many existing radars with less costly ground-based transceivers. SWIM will provide an initial network-centric capability to all the users of the air transportation system. This means that FAA and the Departments of Homeland Security and Defense will eventually share a common, real-time, secure picture of aviation operations across the airspace system. Identifying such NextGen programs across the partner agencies and establishing implementation plans for them in JPDO's Integrated Work Plan will be critical going forward to creating performance metrics for JPDO. Although we recommended in our November report that JPDO develop written procedures that formalize agreements with OMB regarding the leveraging of partner agency resources, this is still a work in progress. For example, OMB officials said they had not reviewed JPDO's 2008 partner agency budget guidance prior to its release to the partner agencies, which highlights the need for JPDO to further develop its procedures for working with OMB. 
Going forward, it will be important for Congress and other stakeholders to evaluate the success of the 2009 budgets in supporting NextGen initiatives, especially as 2009 is expected to be a critical year in the transition from planning NextGen to implementing NextGen. In our November report, we noted that JPDO had not yet developed a comprehensive estimate of the costs of NextGen. Since then, in its recently released 2006 Progress Report, JPDO reported some estimated costs for NextGen, including specifics on some early NextGen programs. JPDO believes the total federal cost for NextGen infrastructure through 2025 will range between $15 billion and $22 billion. JPDO also reported that a preliminary estimate of the corresponding cost to system users, who will have to equip with the advanced avionics that are necessary to realize the full benefits of some NextGen technologies, produced a range of $14 billion to $20 billion. JPDO noted that this range for avionics costs reflects uncertainty about equipage costs for individual aircraft, the number of very light jets that will operate in high-performance airspace, and the amount of out-of-service time required for installation. FAA, in its capital investment plan for fiscal years 2008-2012, includes estimated expenditures for 11 line items that are considered NextGen capital programs. The total 5-year estimated expenditure for these programs is $4.3 billion. In fiscal year 2008, only 6 of the line items are funded, for a total of roughly $174 million; funding for the remaining 5 programs would begin with the fiscal year 2009 budget. According to FAA, in addition to capital spending for NextGen, the agency will spend an estimated $300 million on NextGen-related research and development from fiscal years 2008 through 2012. The administration's budget for fiscal year 2008 for FAA includes a total of $17.8 million to support the activities of JPDO. 
While FAA and JPDO have begun to release estimates for FAA's NextGen investment portfolio, questions remain over which entities will fund and conduct some of the necessary research, development, and demonstration projects that will be key to achieving certain NextGen capabilities. In the past, a significant portion of aeronautics research and development, including intermediate technology development, has been performed by NASA. However, NASA's aeronautics research budget and proposed funding show a 30-percent decline, in constant 2005 dollars, from fiscal year 2005 to fiscal year 2011. To its credit, NASA plans to focus its research on the needs of NextGen. However, NASA is also moving toward a focus on fundamental research and away from developmental work and demonstration projects, which could negatively affect NextGen if these efforts are not assumed by others. According to its 2006 Progress Report, JPDO is building a research and development plan that will document NextGen's research needs and the organizations that will perform the work. For example, JPDO's investment simulation capability relies heavily on NASA's NAS-wide modeling platform, the Airspace Concepts Evaluation System (ACES). This investment simulation capability permits JPDO to, among other things, evaluate alternative research ideas and assess the performance of competing vendors. According to a JPDO official, this capability, which is critical to NextGen research, is eroding as JPDO's investment simulation requirements are expanding. As part of its fundamental research mission, NASA intends to upgrade to ACES-X (a more sophisticated representation of the national airspace system), but not for another two years. Until then, JPDO's investment modeling capability will be constrained unless the office or another partner agency can assume the modeling work. 
While one option would be to contract with private sector vendors to do this type of modeling on a per simulation basis, this solution could be expensive for the government. Moreover, JPDO might not be able to continue facilitating participation by both small and large companies, thus giving both an equal opportunity to demonstrate their ideas, because small companies would have to pay for access to this proprietary modeling capability. This is an issue that needs to be addressed in the short term. JPDO faces the challenge of determining the nature and scope of the research and technology development necessary to begin the transition to NextGen, as well as identifying the entities that can conduct that research and development. According to officials at FAA and JPDO, they are currently studying these issues and trying to assess how much research and development FAA can assume. An FAA official recently testified that the agency proposes to increase its research and development funding by $280 million over the next 5 years. However, a draft report by an advisory committee to FAA stated that FAA would need at least $100 million annually in increased funding to assume NASA's research and development work, and that establishing the necessary infrastructure within FAA could delay the implementation of NextGen by 5 years. More work remains to completely assess the research and development needs of NextGen and the ability of FAA and the other JPDO partner agencies to budget for and conduct the necessary initiatives. This information is critical, as timely completion of research and testing of proposed NextGen systems is necessary to keep the NextGen initiative on schedule. Addressing questions about how human factors issues will affect the move to some key NextGen capabilities is another challenge for JPDO. 
For example, the NextGen Concept of Operations envisions an increased reliance on automation, which raises questions about the role of air traffic controllers in such an automated environment. Similarly, the Concept of Operations envisions that pilots will take on a greater share of the responsibility for maintaining safe separation and other tasks currently performed by controllers. This raises human factors questions about whether pilots can safely perform these additional duties. Although JPDO has begun to model how shifts in air traffic controllers' workloads would affect their performance, it has not yet begun to model how this shift in workload to pilots would affect pilot performance. According to a JPDO official, modeling the effect of changes in pilot workload has not yet begun because JPDO has not yet identified a suitable model for incorporation into its suite of modeling tools. According to a JPDO official, the evolving roles of pilots and controllers are the NextGen initiative's most important human factors issue, but they will be difficult to research because data on pilot behavior are not readily available for use in creating models. In addition to the study of changing roles, JPDO has not yet studied the training implications of various systems or solutions proposed for NextGen. For example, JPDO officials said they will need to study the extent to which new air traffic controllers will have to be trained to operate both the old and the new equipment as the Concept of Operations and enterprise architecture mature. Some stakeholders, such as current air traffic controllers and technicians, will play critical roles in NextGen, and their involvement in planning for and deploying the new technology will be important to the success of NextGen. 
In November 2006, we reported that active air traffic controllers were not involved in the NextGen planning effort and recommended that JPDO determine whether any key stakeholders and expertise were not represented on its IPTs, divisions, or elsewhere within the office. Since then, the head of the controllers' union has taken a seat on the Institute Management Council. However, no active controllers are yet participating at the IPT planning level. Also, aviation technicians do not participate in NextGen efforts. Input from current air traffic controllers who have recent experience controlling aircraft and current technicians who will maintain NextGen equipment is important when considering human factors and safety issues. Our work on past air traffic control modernization projects has shown that a lack of stakeholder or expert involvement early in and throughout a project can lead to cost increases and delays. In addition, we found that some private sector stakeholders have expressed concerns that participation in the Institute might either preclude bidding on future NextGen acquisitions or pose organizational conflicts of interest. FAA's acquisition process generally precludes bids from organizations that have participated in, materially influenced, or had prior knowledge of the requirements for an acquisition. The Institute was aware of this concern and attempted to address it through an amendment to its governing document that strengthened the language protecting participants from organizational conflicts of interest for participation in the NextGen initiative. However, while the amendment language currently operates to protect stakeholders, the language has never been tested or challenged. Thus, it is unclear at this time whether any stakeholder participation is being chilled by conflict of interest concerns. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions from you or other Members of the Subcommittee. 
For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this statement include Kevin Egan, Colin Fallon, Rick Jorgenson, Faye Morrison, and Richard Scott. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The skies over America are becoming more crowded every day. The consensus is that the current system cannot be expanded to meet projected growth. In 2003, recognizing the need for system transformation, Congress authorized the creation of the Joint Planning and Development Office (JPDO), housed within the Federal Aviation Administration (FAA), to lead a collaborative effort of federal and nonfederal aviation stakeholders to conceptualize and plan the Next Generation Air Transportation System (NextGen)--a fundamental redesign and modernization of the national airspace system. JPDO operates in conjunction with its partner agencies, which include FAA; the Departments of Transportation, Commerce, Defense, and Homeland Security; the National Aeronautics and Space Administration (NASA); and the White House Office of Science and Technology Policy. GAO's testimony focuses on the progress that JPDO has made in planning the NextGen initiative and some key issues and challenges that JPDO continues to face. This statement is based on GAO's November 2006 report to this subcommittee as well as ongoing work. In our November 2006 report, we recommended that JPDO take actions to institutionalize its collaboration and determine whether it had the involvement of all key stakeholders. JPDO said it would consider our recommendations. JPDO has made progress in several areas in its planning of the NextGen initiative, but continues to face a number of challenges. JPDO's organizational structure incorporates some of the practices that we have found to be effective for federal interagency collaborations, and includes an institute that facilitates the participation of nonfederal stakeholders. JPDO has faced some organizational challenges, however. Leadership turnover at JPDO and the Institute has raised concerns about the stability of JPDO and the impact of the turnovers on its progress. 
Additionally, we and JPDO officials have heard concerns from stakeholders about the productivity of some integrated product teams and the pace of the planning effort. In response, JPDO officials are currently proposing several changes to JPDO's organizational structure aimed at improving the organization's effectiveness. JPDO has also made progress toward releasing several key planning documents, including a Concept of Operations, an Enterprise Architecture, and an Integrated Work Plan, although in some cases on a revised and extended timeline. JPDO is focusing on the right types of key documents for the foundation of NextGen planning, although the current draft Concept of Operations still lacks important details. In our November 2006 report, we noted that JPDO is fundamentally a planning and coordinating body that lacks authority over the key human and technological resources of its partner agencies. Consequently, institutionalizing the collaborative process with its partner agencies will be critical to JPDO's ability to facilitate the implementation of NextGen. JPDO has identified several tasks, including aligning the enterprise architectures of its partner agencies, working with OMB to establish a cross-agency mechanism for NextGen funding decisions, and working with FAA to revamp a key planning document to focus on the NextGen effort. JPDO has made progress in developing cost estimates for NextGen, recently reporting that it estimates the total federal cost for NextGen infrastructure through 2025 will range between $15 billion and $22 billion. Questions remain, however, over which entities will fund and conduct some of the necessary research, development, and demonstration projects that in the past were often conducted by NASA, and which will be key to achieving certain NextGen capabilities. 
For example, JPDO's investment simulation capability, which relies heavily on a NASA modeling platform, may be constrained unless JPDO or another partner agency can assume the modeling work. JPDO also faces a challenge in addressing questions concerning how human factors issues, such as the changing roles of air traffic controllers in a more automated NextGen environment, will be researched and addressed. Finally, JPDO has a continuing challenge in ensuring the involvement of all key stakeholders, including controllers and technicians. Similarly, questions have arisen over whether conflict of interest concerns could chill the participation of industry stakeholders.
DOD has taken some steps to implement internal safeguards to help ensure that the NSPS performance management system is fair, effective, and credible; however, we believe continued monitoring of safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Specifically, we reported in September 2008 that DOD had taken some steps to (1) involve employees in the system's design and implementation; (2) link employee objectives and the agency's strategic goals and mission; (3) train and retrain employees in the system's operation; (4) provide ongoing performance feedback between supervisors and employees; (5) better link individual pay to performance in an equitable manner; (6) allocate agency resources for the system's design, implementation, and administration; (7) provide reasonable transparency of the system and its operation; (8) impart meaningful distinctions in individual employee performance; and (9) include predecisional internal safeguards to determine whether rating results are fair, consistent, and equitable. For example, all 12 sites we visited trained employees on NSPS, and the DOD-wide tool used to compose self-assessments links employees' objectives to the commands' or agencies' strategic goals and mission. However, we determined that DOD could immediately improve its implementation of three safeguards. First, DOD's implementation of NSPS does not provide employees with adequate transparency over their rating results because it does not require commands or pay pools to publish their respective ratings and share distributions to employees. According to DOD, distributing aggregate data to employees is an effective means for providing transparency, and NSPS program officials at all four components' headquarters told us that publishing overall results is considered a best practice. In addition, 3 of the 12 sites we visited decided not to publish the overall final rating and share distribution results. 
Without transparency over rating and share distributions, employees may believe they are not being rated fairly, which ultimately can undermine their confidence in the system. To address this finding, we recommended that DOD require overall final rating results to be published. DOD concurred with this recommendation and, in 2008, revised its NSPS regulations and guidance to require commands to publish the final overall rating results. Second, NSPS guidance may discourage rating officials from making meaningful distinctions in employee performance because this guidance emphasized that most employees should be evaluated at "3" (or "valued performer") on a scale of 1 to 5. According to the NSPS implementing issuance, rating results should be based on how well employees complete their job objectives using the performance indicators. Although DOD and most of the installations we visited emphasized that there was not a forced distribution of ratings, some pay pool panel members acknowledged that there was a hesitancy to award employee ratings in categories other than "3." Unless NSPS is implemented in a manner that encourages meaningful distinctions in employee ratings in accordance with employees' performance, there will be an unspoken forced distribution of ratings, and employees' confidence in the system may be undermined. As a result, we recommended that DOD encourage pay pools and supervisors to use all categories of ratings as appropriate. DOD partially concurred with this recommendation, and in April 2009, DOD issued additional guidance prohibiting the forced distribution of ratings under NSPS. Third, DOD does not require a third party to analyze rating results for anomalies prior to finalizing ratings. To address this finding, we recommended that DOD require predecisional demographic and other analyses; however, DOD did not concur, stating that a postdecisional analysis is more useful. 
Specifically, in commenting on our prior report, DOD stated that its postdecisional analysis of final rating results by demographics was sufficient to identify barriers and corrective actions. We are currently assessing DOD's postdecisional analysis approach as part of our ongoing review of the implementation of NSPS. Although DOD civilian employees under NSPS responded positively regarding some aspects of the NSPS performance management system, DOD does not have an action plan to address the generally negative employee perceptions of NSPS identified in both the department's Status of Forces Survey of civilian employees and discussion groups we held at 12 selected installations. According to our analysis of DOD's survey from May 2007, NSPS employees expressed slightly more positive attitudes than their DOD colleagues who remain under the General Schedule system about some goals of performance management, such as connecting pay to performance and receiving feedback regularly. For example, an estimated 43 percent of NSPS employees, compared to an estimated 25 percent of all other DOD employees, said that pay raises depend on how well employees perform their jobs. However, in some instances, DOD's survey results showed a decline in attitudes among employees who have been under NSPS the longest. Employees who were among the first converted to NSPS (designated spiral 1.1) were steadily more negative about NSPS from the May 2006 to the May 2007 DOD survey. At the time of the May 2006 administration of the Status of Forces Survey for civilians, spiral 1.1 employees had received training on the system and had begun the conversion process, but had not yet gone through a rating cycle and payout under the new system. As part of this training, employees were exposed to the intent of the new system and the goals of performance management and NSPS, which include annual rewards for high performance and increased feedback on employee performance. 
As DOD and the components proceeded with implementation of the system, survey results showed a decrease in employees' optimism about the system's ability to fulfill its intent and reward employees for performance. The changes in attitude reflected in DOD's employee survey are slight but indicate a movement in employee perceptions. Most of the movement in responses was negative. Specifically, in response to a question about the impact NSPS will have on personnel practices at DOD, the percentage of positive responses decreased from an estimated 40 percent of spiral 1.1 employees in May 2006 to an estimated 23 percent in May 2007. Further, when asked how NSPS compared to previous personnel systems, an estimated 44 percent said it was worse in November 2006, compared to an estimated 50 percent in May 2007. Similarly, employee responses to questions about performance management in general were also more negative from May 2006 to May 2007. Specifically, the results of the May 2006 survey estimated that about 67 percent of spiral 1.1 employees agreed that the performance appraisal is a fair reflection of performance, compared to 52 percent in May 2007. Further, the percentage of spiral 1.1 employees who agreed that the NSPS performance appraisal system improves organizational performance decreased from an estimated 35 percent to 23 percent. Our discussion group meetings gave rise to views consistent with DOD's survey results. Although the results of our discussion groups are not generalizable to the entire population of DOD civilians, the themes that emerged from our discussions provide valuable insight into civilian employees' perceptions about the implementation of NSPS and augment DOD's survey findings. Some civilian employees and supervisors under NSPS seemed optimistic about the intent of the system; however, most of the DOD employees and supervisors we spoke with expressed a consistent set of wide-ranging concerns. 
Specifically, employees noted (1) NSPS’s negative effect on employee motivation and morale, (2) the excessive amount of time and effort required to navigate the performance management process, (3) the potential influence that employees’ and supervisors’ writing skills have on panels’ assessments of employee ratings, (4) the lack of transparency and understanding of the pay pool panel process, and (5) the rapid pace at which the system was implemented, which often resulted in employees feeling unprepared and unable to find answers to their questions. These negative attitudes are not surprising given that organizational transformations often entail fundamental and radical change that requires an adjustment period to gain employee acceptance and trust. To address employee attitudes and acceptance, OPM issued guidance that recommends—and we believe it is a best practice—that agencies use employee survey results to provide feedback to employees and develop and implement an action plan that guides their efforts to address the results of employee assessments. However, according to Program Executive Office officials, DOD has not developed a specific action plan to address critical issues identified by employee perceptions, because the department wants employees to have more time under the system before making changes. Without such a plan, DOD is unable to make changes that address employee perceptions that could result in greater employee acceptance of NSPS. We therefore recommended, in our September 2008 report, that DOD develop and implement a specific action plan to address employee perceptions of NSPS ascertained from DOD’s surveys and employee focus groups. The plan should include actions to mitigate employee concerns about, for example, the potential influence that employees’ and supervisors’ writing skills have on the panels’ assessment of employee ratings or other issues consistently identified by employees or supervisors. 
DOD partially concurred with our recommendation, noting that it will address areas of weakness identified in its comprehensive, in-progress evaluation of NSPS and that it is institutionalizing a continuous improvement strategy. Since our 2008 review, NSPS officials at DOD have told us that they are working on an action plan; however, to date the department has not provided us a plan for review. DOD's implementation of a more performance- and results-based personnel system has positioned the agency at the forefront of a significant transition facing the federal government. We recognize that DOD faces many challenges in implementing NSPS, as any organization would in implementing a large-scale organizational change. NSPS is a new program, and organizational change requires time for employees to accept. Continued monitoring of internal safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Moreover, until DOD develops an action plan and takes specific steps to mitigate negative employee perceptions of NSPS, DOD civilian employees will likely continue to question the fairness of their ratings and lack confidence in the system. The degree of ultimate success of NSPS largely depends on the extent to which DOD incorporates internal safeguards and addresses employee perceptions. Moving forward, we hope that the Defense Business Board considers our previous work on NSPS as it assesses how NSPS operates and its underlying policies. This concludes my prepared statement. I would be happy to respond to any questions that you or members of the subcommittee may have at this time. For further information about this statement, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, at (202) 512-3604, or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this statement include Marion Gatling (Assistant Director), Lori Atkinson, Renee Brown, and Lonnie McAllister. The Department of Defense (DOD) is in the process of implementing its new human capital system for managing civilian personnel—the National Security Personnel System (NSPS). Key components of NSPS include compensation, classification, and performance management. Implementation of NSPS could have far-reaching implications, not just for DOD, but for civil service reform across the federal government. As of February 2009, about 20,000 civilian employees were under NSPS. Based on GAO's prior work reviewing performance management in the public sector, GAO developed an initial list of safeguards that NSPS should include to ensure it is fair, effective, and credible. In 2008, Congress directed GAO to evaluate, among other things, the extent to which DOD implemented accountability mechanisms, including those in 5 U.S.C. section 9902(b)(7) and other internal safeguards in NSPS. While DOD has taken some steps to implement internal safeguards to ensure that NSPS is fair, effective, and credible, in its September 2008 report, GAO found that the implementation of three safeguards could be improved. First, DOD does not require a third party to analyze rating results for anomalies prior to finalizing ratings, and thus it does not have a process to determine whether ratings are discriminatory before they are finalized. Without predecisional analysis, employees may lack confidence in the fairness and credibility of NSPS. To address this finding, GAO recommended that DOD require predecisional demographic and other analyses; however, DOD did not concur, stating that postdecisional analysis is more useful. GAO continues to believe this recommendation has merit. Second, the process lacks transparency because DOD does not require commands to publish final rating distributions, although doing so is recognized as a best practice by DOD. Without transparency over rating distributions, employees may not believe they are being rated fairly. To address this finding, GAO recommended that DOD require publication of overall final rating results. DOD concurred with this recommendation and in 2008 revised its guidance to require such publication. 
Third, NSPS guidance may discourage rating officials from making meaningful distinctions in employee ratings because it indicated that the majority of employees should be rated at the "3" level, on a scale of 1 to 5, resulting in hesitancy to award ratings in other categories. Unless implementation of NSPS encourages meaningful distinctions in employee performance, employees may believe there is an unspoken forced distribution of ratings, and their confidence in the system will be undermined. To address this finding, GAO recommended that DOD encourage pay pools and supervisors to use all categories of ratings as appropriate. DOD partially concurred with this recommendation, but has not yet taken any action to implement it. This statement is based on GAO's September 2008 report, which determined (1) the extent to which DOD has implemented internal safeguards to ensure NSPS was fair, effective, and credible; and (2) how DOD civilians perceive NSPS and what actions DOD has taken to address these perceptions. For that report, GAO analyzed relevant documents and employee survey results; interviewed appropriate officials; and conducted discussion groups at 12 selected installations. GAO recommended ways to better address the safeguards and employee perceptions. View GAO-09-464T or key components. For more information, contact Brenda S. Farrell at (202) 512-3604 or farrellb@gao.gov. Although DOD employees under NSPS responded positively regarding some aspects of performance management, DOD does not have an action plan to address the generally negative employee perceptions of NSPS. According to DOD's survey of civilian employees, overall, employees under NSPS are positive about some aspects of performance management, such as connecting pay to performance. However, employees who had the most experience under NSPS showed negative movement in their perceptions. For example, the percentage of NSPS employees who believe that NSPS will have a positive effect on DOD's personnel practices declined from an estimated 40 percent in 2006 to 23 percent in 2007. Some negative perceptions also emerged during discussion groups that GAO held. For example, employees and supervisors were concerned about the excessive amount of time required to navigate the process. 
While it is reasonable for DOD to allow employees some time to accept NSPS, not addressing persistent negative employee perceptions could jeopardize employee acceptance and successful implementation of NSPS. As a result, GAO recommended that DOD develop and implement an action plan to address employee concerns about NSPS. DOD partially concurred with GAO's recommendation, but has not yet developed an action plan. GAO is recommending that DOD improve the implementation of some safeguards and develop and implement an action plan to address employee concerns about NSPS. DOD generally concurred with our recommendations, with the exception of one requiring predecisional review of ratings. To view the full product, including the scope and methodology, click on GAO-08-773. For more information, contact Brenda S. Farrell at (202) 512-3604 or farrellb@gao.gov. Although DOD employees under NSPS are positive regarding some aspects of performance management, DOD does not have an action plan to address the generally negative employee perceptions of NSPS. According to DOD's survey of civilian employees, employees under NSPS are positive about some aspects of performance management, such as connecting pay to performance. However, employees who had the most experience under NSPS showed negative movement in their perceptions. For example, the percentage of NSPS employees who believe that NSPS will have a positive effect on DOD's personnel practices declined from 40 percent in 2006 to 23 percent in 2007. Negative perceptions also emerged during discussion groups that GAO held. For example, employees and supervisors were concerned about the excessive amount of time required to navigate the process. Although the Office of Personnel Management issued guidance recommending that agencies use employee survey results to provide feedback to employees and implement an action plan to guide their efforts to address employee assessments, DOD has not developed an action plan to address employee perceptions. While it is reasonable for DOD to allow employees some time to accept NSPS because organizational changes often require time to adjust, it is prudent to address persistent negative employee perceptions. Without such a plan, DOD is unable to make changes that could result in greater employee acceptance of NSPS. 
Given that a large-scale organizational change initiative, such as the Department of Defense's (DOD) National Security Personnel System (NSPS), is a substantial commitment that will take years to complete, it is important that DOD and Congress be kept informed of the full cost of implementing NSPS. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO analyzed the extent to which DOD has (1) fully estimated total costs associated with the implementation of NSPS and (2) expended or obligated funds to design and implement NSPS through fiscal year 2006. GAO interviewed department officials and analyzed the NSPS Program Executive Office's (PEO), the military services', and the Washington Headquarters Services' (hereafter referred to as the components) cost estimates and reports of expended and obligated funds. DOD's November 2005 estimate that it will cost $158 million to implement NSPS does not include the full cost that the department expects to incur as a result of implementing the new system. Federal financial accounting standards state that reliable information on the costs of federal programs and activities is crucial for effective management of government operations and recommend that full costs of programs and their outputs be provided to assist Congress and executives in making informed decisions on program resources and to ensure that programs get expected and efficient results. The full cost includes both those costs specifically identifiable to carry out the program, or direct costs, and those costs that are common to multiple programs but cannot be specifically identified with any particular program, or indirect costs. While the standards emphasize that full cost information is essential for managing federal programs, their activities, and outputs, the standards provide that items may be omitted from cost information if that omission would not change or influence the judgment of a reasonable person relying on the cost information. Based on GAO's review of documentation provided by DOD and discussions with department officials, GAO found that DOD's estimate includes some direct costs, such as the start-up and operation of the NSPS PEO and the development and delivery of new NSPS training courses, but it does not include other direct costs such as the full salary costs of all civilian and military personnel who directly support NSPS activities departmentwide. 
Before developing its estimate, DOD had not fully defined all the direct and indirect costs needed to manage the program. Without a better estimate, decision makers—within DOD and Congress—will not have complete information about whether adequate resources are being provided for implementing NSPS. GAO recommends that DOD define all costs needed to manage NSPS, prepare a revised estimate of those costs for implementing the system in accordance with federal financial accounting standards, and develop a comprehensive oversight framework to ensure that all funds expended or obligated to design and implement NSPS are fully captured and reported. In reviewing a draft of this report, DOD generally concurred with GAO's recommendations. www.gao.gov/cgi-bin/getrpt?GAO-07-851. The total amount of funds DOD has expended or obligated to design and implement NSPS during fiscal years 2004 through 2006 cannot be determined because DOD has not established an oversight mechanism to ensure that these costs are fully captured. In May, the NSPS Senior Executive established guidance for tracking and reporting NSPS implementation costs that requires the components to develop mechanisms to capture these costs and to report quarterly on their costs to the NSPS PEO. However, this guidance does not define the direct and indirect costs DOD requires that the components capture. DOD's pervasive financial management deficiencies have been the basis for GAO's designation of this area as high risk since 1995. GAO's review of submitted reports from the components found that their official accounting systems do not capture the total funds expended or obligated to design and implement NSPS. Without an effective oversight mechanism to ensure that the official accounting systems capture all appropriate costs, DOD and Congress do not have visibility over the actual cost to design and implement NSPS. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or stewartd@gao.gov. People are critical to any agency transformation because they define an agency's culture, develop its knowledge base, promote innovation, and are its most important asset. Thus, strategic human capital management at the Department of Defense (DOD) can help it marshal, manage, and maintain the people and skills needed to meet its critical mission. 
In November 2003, Congress provided DOD with significant flexibility to design a modern human resources management system. On November 1, 2005, DOD and the Office of Personnel Management (OPM) jointly released the final regulations on DOD's new human resources management system, known as the National Security Personnel System (NSPS). GAO believes that DOD's final NSPS regulations contain many of the basic principles that are consistent with proven approaches to strategic human capital management. For instance, the final regulations provide for (1) a flexible, contemporary, market-based and performance-oriented compensation system—such as pay bands and pay for performance; (2) giving greater priority to employee performance in its retention decisions in connection with workforce rightsizing and reductions-in-force; and (3) involvement of employee representatives throughout the implementation process, such as having opportunities to participate in developing the implementing issuances. However, future actions will determine whether such labor relations efforts will be meaningful and credible. Several months ago, with the release of the proposed regulations, GAO observed that some parts of the human resources management system raised questions for DOD, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor management relations. GAO also identified multiple implementation challenges for DOD once the final regulations for the new system were issued. Despite the positive aspects of the regulations, GAO has several areas of concern. First, DOD has considerable work ahead to define the important details for implementing its system—such as how employee performance expectations will be aligned with the department's overall mission plans and other measures of performance, and how DOD would promote consistency and provide general oversight of the performance management system to ensure it is administered in a fair, credible, transparent manner. These and other critically important details must be defined in conjunction with applicable stakeholders. Second, the regulations merely allow, rather than require, the use of core competencies that can help to provide consistency and clearly communicate to employees what is expected of them. 
Third, although the regulations do provide for continuing collaboration with employee representatives, they do not identify a process for the continuing involvement of individual employees in the implementation of NSPS. This testimony provides GAO's overall observations on selected provisions of the final regulations. Going forward, GAO believes that (1) DOD would benefit from developing a comprehensive communications strategy, (2) DOD must ensure that it has the necessary institutional infrastructure in place to make effective use of its new authorities, (3) a chief management officer or similar position is essential to effectively provide sustained and committed leadership to the department's overall business transformation efforts, including NSPS, and (4) DOD should develop procedures and methods to initiate implementation efforts relating to NSPS. www.gao.gov/cgi-bin/getrpt?GAO-06-227T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. DOD's regulations are especially critical and need to be implemented properly because of their potential implications for related governmentwide reform. In this regard, in our view, classification, compensation, critical hiring, and workforce restructuring reforms should be pursued on a governmentwide basis before and separate from any broad-based labor-management or due process reforms. The Department of Defense's (DOD) new personnel system—the National Security Personnel System (NSPS)—will have far-reaching implications not just for DOD, but for civil service reform across the federal government. The National Defense Authorization Act for Fiscal Year 2004 gave DOD significant authorities to redesign the rules, regulations, and processes that govern the way that more than 700,000 defense civilian employees are hired, compensated, promoted, and disciplined. In addition, NSPS could serve as a model for governmentwide transformation in human capital management. 
However, if not properly designed and effectively implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. DOD's current process to design its new personnel management system consists of four stages: (1) development of design options, (2) assessment of design options, (3) issuance of proposed regulations, and (4) a statutory public comment period, a meet and confer period with employee representatives, and a congressional notification period. DOD's initial design process was unrealistic and inappropriate. However, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious and deliberative process that involved more stakeholders. This report (1) describes DOD's process to design its new personnel management system, (2) analyzes the extent to which DOD's process reflects key practices for successful transformations, and (3) identifies the most significant challenges DOD faces in implementing NSPS. DOD's NSPS design process generally reflects four of six selected key practices for successful organizational transformations. First, DOD and OPM have developed a process to design the new personnel system that is supported by top leadership in both organizations. Second, from the outset, a set of guiding principles and key performance parameters have guided the NSPS design process. Third, DOD has a dedicated team in place to design and implement NSPS and manage the transformation process. Fourth, DOD has established a timeline, albeit an ambitious one, and an implementation plan. The design process, however, is lacking in two other practices. First, DOD developed and implemented a written communication strategy document, but the strategy is not comprehensive. It does not identify all key external stakeholders and their concerns, and does not tailor key messages to specific stakeholder groups. Failure to adequately consider a wide variety of people and cultural issues can lead to unsuccessful transformations. Second, while the process has involved employees through town hall meetings and other mechanisms, it has not included employee representatives in the working groups that drafted the design options. 
It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirement to include employee representatives in the development of DOD's new labor relations system authorized as part of NSPS. A successful transformation must provide for meaningful involvement by employees and their representatives to gain their input into and understanding of the changes that will occur. GAO is making recommendations to improve the comprehensiveness of the NSPS communication strategy and to evaluate the impact of NSPS. DOD did not concur with one recommendation and partially concurred with two others. www.gao.gov/cgi-bin/getrpt?GAO-05-730. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek B. Stewart at (202) 512-5559 or stewartd@gao.gov. DOD will face multiple implementation challenges. For example, in addition to the challenges of continuing to involve employees and other stakeholders and providing adequate resources to implement the system, DOD faces the challenge of ensuring an effective, ongoing two-way communication strategy and evaluating the new system. In recent testimony, GAO stated that DOD's communication strategy must include the active and visible involvement of a number of key players, including the Secretary of Defense, for successful implementation of the system. Moreover, DOD must ensure sustained and committed leadership after the system is fully implemented and the NSPS Senior Executive and the Program Executive Office transition out of existence. To provide sustained leadership attention to a range of business transformation initiatives, like NSPS, GAO recently recommended the creation of a chief management officer at DOD. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. High-performing organizations have found that to successfully transform themselves, they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. 
To foster such cultures, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage—"line of sight"—between individual performance and organizational success. These key practices include the following. 1. Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals. 2. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results. 3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities. Based on previously issued reports on public sector organizations' approaches to reinforce individual accountability for results, GAO identified key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. 4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities. 5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. 6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on solid, reliable, and transparent performance management systems with adequate safeguards. 7. Make meaningful distinctions in performance. 
Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers. www.gao.gov/cgi-bin/getrpt?GAO-03-488. 8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees' and stakeholders' understanding and ownership of the system and belief in its fairness. To view the full report, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. 9. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. Post-Hearing Questions for the Record Related to the Department of Defense's National Security Personnel System (NSPS). GAO-06-582R. Washington, D.C.: March 24, 2006. Human Capital: Designing and Managing Market-Based and More Performance-Oriented Pay Systems. GAO-05-1048T. Washington, D.C.: September 27, 2005. Questions for the Record Related to the Department of Defense's National Security Personnel System. GAO-05-771R. Washington, D.C.: June 14, 2005. Questions for the Record Regarding the Department of Defense's National Security Personnel System. GAO-05-770R. Washington, D.C.: May 31, 2005. Post-hearing Questions Related to the Department of Defense's National Security Personnel System. GAO-05-641R. Washington, D.C.: April 29, 2005. Human Capital: Selected Agencies' Statutory Authorities Could Offer Options in Developing a Framework for Governmentwide Reform. GAO-05-398R. Washington, D.C.: April 21, 2005. Human Capital: Preliminary Observations on Proposed Regulations for DOD's National Security Personnel System. GAO-05-559T. Washington, D.C.: April 14, 2005. Human Capital: Preliminary Observations on Proposed Department of Defense National Security Personnel System Regulations. GAO-05-517T. 
Washington, D.C.: April 12, 2005. Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005. Human Capital: Principles, Criteria, and Processes for Governmentwide Federal Human Capital Reform. GAO-05-69SP. Washington, D.C.: December 1, 2004. Human Capital: Implementing Pay for Performance at Selected Personnel Demonstration Projects. GAO-04-83. Washington, D.C.: January 23, 2004. Human Capital: Building on DOD’s Reform Efforts to Foster Governmentwide Improvements. GAO-03-851T. Washington, D.C.: June 4, 2003. Human Capital: DOD’s Civilian Personnel Strategic Management and the Proposed National Security Personnel System. GAO-03-493T. Washington, D.C.: May 12, 2003. Defense Transformation: DOD’s Proposed Civilian Personnel System and Governmentwide Human Capital Reform. GAO-03-741T. Washington, D.C.: May 1, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD is in the process of implementing this human capital system, and according to DOD, about 212,000 civilian employees are currently under the system. On February 11, 2009, however, the House Armed Services Committee and its Subcommittee on Readiness asked DOD to halt conversions of any additional employees to NSPS until the administration and Congress could properly address the future of DOD's personnel management system. On March 16, 2009, DOD and the Office of Personnel Management (OPM) announced an upcoming review of NSPS policies, regulations, and practices. According to DOD, the department has delayed any further transitions of employees into NSPS until at least October 2009--pending the outcome of its review. Furthermore, on May 14, 2009, the Deputy Secretary of Defense asked the Defense Business Board to form what has become this task group to review NSPS to help the department determine, among other things, whether NSPS is operating in a fair, transparent, and effective manner. This statement focuses on the performance management aspect of NSPS, specifically (1) the extent to which DOD has implemented internal safeguards to ensure the fairness, effectiveness, and credibility of NSPS and (2) how DOD civilian personnel perceive NSPS and what actions DOD has taken to address these perceptions. It is based on the work we reported on in our September 2008 report, which was conducted in response to a mandate in the National Defense Authorization Act for Fiscal Year 2008. This mandate also directed us to continue examining DOD efforts in these areas for the next 2 years. We currently have ongoing work reviewing the implementation of NSPS for the second year, and we also will perform another review next year. 
DOD has taken some steps to implement internal safeguards to help ensure that the NSPS performance management system is fair, effective, and credible; however, we believe continued monitoring of safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Specifically, we reported in September 2008 that DOD had taken some steps to (1) involve employees in the system's design and implementation; (2) link employee objectives and the agency's strategic goals and mission; (3) train and retrain employees in the system's operation; (4) provide ongoing performance feedback between supervisors and employees; (5) better link individual pay to performance in an equitable manner; (6) allocate agency resources for the system's design, implementation, and administration; (7) provide reasonable transparency of the system and its operation; (8) impart meaningful distinctions in individual employee performance; and (9) include predecisional internal safeguards to determine whether rating results are fair, consistent, and equitable. For example, all 12 sites we visited trained employees on NSPS, and the DOD-wide tool used to compose self-assessments links employees' objectives to the commands' or agencies' strategic goals and mission. However, we determined that DOD could immediately improve its implementation of three safeguards. Although DOD civilian employees under NSPS responded positively regarding some aspects of the NSPS performance management system, DOD does not have an action plan to address the generally negative employee perceptions of NSPS identified in both the department's Status of Forces Survey of civilian employees and discussion groups we held at 12 select installations. 
According to our analysis of DOD's survey from May 2007, NSPS employees expressed slightly more positive attitudes than their DOD colleagues who remain under the General Schedule system about some goals of performance management, such as connecting pay to performance and receiving feedback regularly. For example, an estimated 43 percent of NSPS employees compared to an estimated 25 percent of all other DOD employees said that pay raises depend on how well employees perform their jobs.
Treasury has continued to focus on CPP, but a variety of other programs have been created or are in progress, as shown in table 1. As of March 5, 2009, Treasury had disbursed almost 80 percent of the $250 billion it had allocated for CPP to purchase almost $197 billion in preferred shares of 467 qualified financial institutions (table 1). Treasury also has begun to receive dividend payments relating to capital purchases under CPP and other programs. According to Treasury, as of February 17, 2009, it had received about $2.4 billion. Initially, Treasury approved $125 billion in capital purchases for nine of the largest public financial institutions that federal banking regulators and Treasury considered to be systemically significant to the operation of the financial system. At the time, these nine institutions held about 55 percent of U.S. banking assets. Subsequent purchases were made in qualified institutions of various sizes (in terms of total assets) and types. As we noted in our January report, most of the institutions that received CPP capital were publicly held institutions, although a limited number of privately held institutions and community development financial institutions (CDFI) also received funds. Treasury has taken a number of important steps toward better reporting on and monitoring of CPP. These steps are in keeping with our prior recommendations that Treasury bolster its ability to determine whether institutions are using CPP proceeds in ways that are consistent with the act’s purposes and establish mechanisms to monitor compliance with program requirements. However, Treasury needs to take further steps in this area. Treasury has done an initial survey of the largest institutions to monitor their lending and other activities and announced plans to analyze quarterly monitoring data (call reports) for all reporting institutions. 
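The disbursement figures above imply a simple utilization rate. As a minimal arithmetic sketch, using only the dollar amounts stated in the text (all variable names are illustrative, not from Treasury reporting):

```python
# Illustrative check of the CPP figures cited above, in billions of dollars:
# Treasury allocated $250 billion for CPP and had used almost $197 billion
# to purchase preferred shares as of March 5, 2009.
allocated = 250.0  # total allocated for CPP, from the text
disbursed = 197.0  # purchased in preferred shares, from the text

share = disbursed / allocated
print(f"{share:.1%}")  # prints 78.8%, consistent with "almost 80 percent"
```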
In addition, Treasury is developing a more limited monthly survey of lending by smaller institutions participating in the program. These efforts are important steps toward ensuring that all participating institutions are held accountable for their use of the funds and are consistent with our past recommendation that Treasury seek similar information from existing CPP participants. We will continue to monitor Treasury's oversight efforts as well as the consistency of the approval process in future work. Treasury has also continued to take steps to increase its planned oversight of compliance with terms of the CPP agreements, including limitations on executive compensation, dividends, and stock repurchases. Among these steps, Treasury has named an Interim Chief Compliance Officer. However, Treasury has not finalized its plans for detecting noncompliance with CPP requirements or for taking enforcement actions. Without a more structured mechanism in place to ensure compliance with all CPP requirements, and as more institutions continue to participate in the program, ensuring compliance with these aspects of the program will become increasingly important and challenging. In its recently announced Financial Stability Plan, Treasury called for banks receiving future government funds to be held responsible for appropriate use of those funds through (1) stronger restrictions on dividend payment and executive compensation, and (2) enhanced reporting to the public, including reporting on lending activity. In addition, Treasury is in the process of drafting new regulations to implement the executive compensation requirements in the American Recovery and Reinvestment Act of 2009 (the Recovery Act). We will also continue to monitor the system that Treasury develops to ensure compliance with the agreements and the implementation of additional oversight and accountability efforts under its new plan. 
Treasury has also continued to make some progress in improving the transparency of TARP and a few weeks ago announced its plans for the remaining TARP funds. In our December 2008 report, we first raised questions about the effectiveness of Treasury’s communication strategy for TARP with Congress, the financial markets, and the public. These questions were further heightened in the COP’s January report, which raised similar questions about Treasury’s strategy for TARP. In response to our recommendation about its communication strategy, Treasury noted numerous publicly available reports, testimonies, and speeches. However, even after reviewing these items collectively, we found that Treasury’s strategic vision for TARP remained unclear. For example, Treasury initially outlined a strategy to purchase whole loans and mortgage-backed securities from financial institutions, but changed direction to make capital investments in qualifying financial institutions as the global community opted to move in this direction. However, once Treasury determined that capital infusions were preferable to purchasing whole mortgages and mortgage-backed securities, it did not clearly articulate how the various programs—such as CPP, the Systemically Significant Failing Institutions (SSFI) program, and the Targeted Investment Program (TIP)—would work collectively to help stabilize financial markets. For instance, Treasury has used similar approaches—capital infusions—to stabilize healthy institutions under CPP as well as SSFI and TIP, albeit with more stringent requirements. Moreover, with the exception of institutions selected for TIP being viewed as able to raise private capital, both SSFI and TIP share similar selection criteria. Further, the same institution may be eligible for multiple programs. At least two institutions (Citigroup and Bank of America) currently participate in more than one program, adding to the confusion about Treasury’s strategy and vision for implementing TARP. 
Other actions also have raised additional questions about Treasury’s strategy. For example, Treasury announced the first institution under TIP weeks before the program was established. Similarly, the Asset Guarantee Program was established after Treasury announced that it would guarantee assets under such a program, but many of the details of the program have yet to be worked out. Since our January report, Treasury has taken three key actions related to our recommendation about the need for a clearly articulated vision for the program. On February 10, Treasury announced the Financial Stability Plan, which outlined a set of measures to address the financial crisis and restore confidence in U.S. financial and housing markets. The plan appears to be an approach designed to resolve the credit crisis by restarting the flow of credit to consumers and businesses, strengthening financial institutions, and providing aid to homeowners and small businesses. On February 25, Treasury announced the standardized terms and conditions for eligible financial institutions participating in the Capital Assistance Program (CAP). Under CAP, an eligible institution that is found by its federal banking regulator to need additional capital to continue lending and absorb losses in a severe economic downturn will be eligible to participate in CAP. Such institutions will be eligible to receive a capital investment from Treasury, with regulatory approval, in the form of preferred securities that are convertible into common equity to help absorb losses and serve as a bridge to receiving private capital. A key element of Treasury’s Financial Stability Plan, CAP is designed to ensure that, in severe economic conditions, the largest U.S. bank holding companies have sufficient capital to support lending to creditworthy homeowners and businesses. 
As part of this effort, the federal banking regulators—the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, and Office of Thrift Supervision—announced that they will begin conducting a one-time forward-looking capital assessment (or stress test) of the balance sheets of the 19 largest bank holding companies with assets exceeding $100 billion. These institutions are required to participate in the coordinated supervisory capital assessment and may obtain additional capital from CAP if necessary. Regulators noted that the capital assessment process for all eligible institutions is expected to be completed by April 30, 2009. On March 4, 2009, Treasury unveiled its Making Home Affordable program, which is based in part on the use of TARP funds. Among other things, the plan is designed to do the following: It will use $75 billion ($50 billion from TARP funds) to modify the loans of up to 3-4 million homeowners to avoid potential foreclosure. The goal of modifying the mortgages of these homeowners is to reduce the amount owed per month to sustainable levels (a mortgage debt-to-income ratio of 31 percent). Treasury will share the cost of restructuring the mortgages with the other stakeholders (e.g., financial institutions holding whole loans or investors if loans have been securitized). Treasury announced a series of financial incentives for the loan servicers, mortgage holders/investors, and borrowers that are intended to “pay for success,” encourage borrowers to continue paying on time under the modified loan, and encourage servicers and mortgage holders/investors to modify at-risk loans before the borrower falls behind on a payment. It includes an initiative to help up to 4-5 million homeowners to refinance loans owned or guaranteed by Freddie Mac and Fannie Mae at current market rates. 
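The 31 percent debt-to-income target for the loan modification initiative described above (as distinct from the refinance initiative) can be illustrated with a minimal sketch; the borrower income figure below is hypothetical and not drawn from the testimony.

```python
def target_monthly_payment(gross_monthly_income, dti_target=0.31):
    """Monthly mortgage payment implied by a debt-to-income (DTI) target.

    Under the Making Home Affordable modification goal described above,
    the modified mortgage payment should bring mortgage debt to no more
    than 31 percent of the borrower's gross monthly income.
    """
    return gross_monthly_income * dti_target

# Hypothetical borrower with $4,000 gross monthly income:
payment = target_monthly_payment(4_000)   # about $1,240 per month
```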
According to Treasury, these homeowners would not otherwise be able to refinance their loans at the conforming loan rates because the declining value of their homes has left them with little or no equity. Refinancing at current mortgage rates could help homeowners save thousands of dollars on their annual mortgage payments. It increases Treasury’s funding commitment to Fannie Mae and Freddie Mac to ensure the strength and security of the mortgage market and to help maintain mortgage affordability. The $200 billion funding commitment is based on authority granted to Treasury under the Housing and Economic Recovery Act of 2008. We will continue to monitor the development and implementation of Treasury’s plan, including how its actions address the challenges we have previously identified. Treasury also established the Auto Industry Financing Program (AIFP) in December 2008 to prevent a disruption of the domestic automotive industry that would pose systemic risk to the nation’s economy. Under this program, Treasury has lent $13.4 billion to GM and $4 billion to Chrysler to allow the automakers to continue operating while working out details of their plans to become solvent, such as achieving concessions with stakeholders. The loans were designed to allow the automakers to operate through the first quarter of 2009 with recognition that after that point GM and Chrysler would need additional funds or have to take other steps, such as an orderly bankruptcy. As required by the terms of their loan agreements, GM and Chrysler submitted restructuring plans to Treasury in February that describe the actions the automakers will take to become financially solvent. Because of the continued sluggish economy and lower than expected revenues, GM and Chrysler are requesting an additional $16.6 billion and $5 billion in federal financial assistance, respectively. 
Treasury is currently assessing the automakers’ restructuring plans and determining what the government’s role will be in future assistance. By March 31, 2009, GM and Chrysler must report to the Secretary of the Treasury on their progress in implementing these restructuring plans. The Secretary will then determine whether the companies have made sufficient progress in implementing the restructuring plans; if they have not, the loans are automatically accelerated and become due 30 days later. As part of our oversight responsibilities for TARP, we are monitoring Treasury’s implementation of AIFP, including the auto manufacturers’ use of federal funds and development of the required restructuring plans. Treasury has made progress in establishing its management infrastructure for TARP, including in hiring, overseeing contracts, and establishing internal controls. However, hiring for OFS is still ongoing, Treasury is working to improve its oversight of contractors, and its development of a system of internal control is still evolving. In the hiring area—one that we highlighted in our first report—Treasury took steps to help maintain continuity of leadership within OFS during and after the transition to the new administration. Specifically, Treasury ensured that interim chief positions would be filled to ensure a smooth transition and used direct-hire authority and various other appointments to bring a number of career staff on board quickly. OFS has increased its overall staff since our December 2008 report from 48 to 90 employees as of January 26, which includes an increase of permanent staff from 5 to 38. Treasury officials recently told us that the number of permanent staff had increased to 60. While progress has been made since our last report, the number of temporary and contract staff who will be needed to serve long-term organizational needs remains unknown. 
Because TARP has added many new programs since it was first established in October and program activities are changing under the new administration, we recognize that Treasury may find it difficult to determine OFS’s long-term organizational needs at this time. However, such considerations will be vital to retaining institutional knowledge in the organization. Treasury’s use of existing contract flexibilities has enabled it to enter into agreements and award contracts quickly in support of TARP. However, Treasury’s use of time-and-materials contracts, although authorized when flexibility is needed, can increase the risk that government dollars will be wasted unless adequate mechanisms are in place to oversee contractor performance. In this regard, Treasury has improved its oversight of contractors, including those using time-and-materials pricing. In addition, while Treasury has taken the important step of recently issuing an interim regulation outlining the process for reviewing and addressing conflicts of interest among new contractors and financial agents, it is still reviewing existing contracts or agreements to ensure conformity with the new regulation. We believe this step is a necessary component of a comprehensive and complete system to ensure that all conflicts are fully identified and appropriately addressed. OFS has adopted a framework for developing and implementing its system of internal control for TARP activities. OFS plans to use this framework to develop specific policies, drive communications on expectations, and measure compliance with internal control standards and policies. However, it has yet to develop comprehensive written policies and procedures governing TARP activities or implement a disciplined risk-assessment process. In each of these areas, we made additional recommendations. Specifically, we recommended that Treasury continue to expeditiously hire personnel needed to carry out and oversee TARP. 
For contracting oversight, we recommended that Treasury expedite efforts to ensure that sufficient personnel are assigned and properly trained to oversee the performance of all contractors, especially for contracts priced on a time-and-materials basis, and move toward fixed-price arrangements whenever possible as program requirements are better defined over time. We also recommended that Treasury review and renegotiate existing conflict-of-interest mitigation plans, as necessary, to enhance specificity and conformity with the new interim conflicts of interest regulation and that it take continued steps to manage and monitor conflicts of interest and enforce mitigation plans. Finally, we recommended that Treasury, in addition to developing a comprehensive system of internal controls, develop and implement a well-defined and disciplined risk-assessment process, because such a process is essential to monitoring the status of TARP programs and identifying any risks that announced programs will not be adequately funded. We will continue to monitor OFS’s hiring and contracting practices and implementation of the internal control framework, which is vital to TARP’s effectiveness. It is still too early in TARP’s implementation to see measurable results in many areas given that program actions have only recently occurred and there are time lags in the reporting of data. Even with more time and better data, it will remain difficult to separate the impact of TARP activities from the effects of other economic forces. Some indicators suggest that the cost of credit has declined in interbank, mortgage, and corporate debt markets since the December report. However, while perceptions of risk (as measured by premiums over Treasury securities) have declined in interbank markets, they have changed very little in corporate bond and mortgage markets. 
Finally, as noted in December, these indicators may be suggestive of TARP’s ongoing impact, but no single indicator or set of indicators can provide a definitive determination of its effects because of the range of actions that have been and are being taken to address the current crisis. These include coordinated efforts by U.S. regulators—namely, the Federal Deposit Insurance Corporation, the Board of Governors of the Federal Reserve System, and the Federal Housing Finance Agency—as well as actions by financial institutions to mitigate foreclosures. For example, a large drop in mortgage rates occurred shortly after the Federal Reserve announced it would purchase up to $500 billion in mortgage-backed securities, highlighting the fact that policies outside of TARP may have important effects on credit markets. We will continue to refine and monitor the indicators. Additionally, we plan to use the Treasury survey data in our efforts to evaluate changes in lending activity resulting from CPP. We recognize that the data have certain limitations, primarily that they are self-reported and difficult to benchmark because they are unique. Nonetheless, we think the data will prove valuable in future analyses. You also asked that I discuss the impact of TARP and related activities on the national debt and borrowing. Congress has assigned to the Treasury Department the responsibility to borrow the funds necessary to finance the gap between cash in and cash out, subject to a statutory limit. Since the onset of the current recession in December 2007, the gap between revenues and outlays has grown. Because Treasury must borrow the funds disbursed, TARP and other actions taken to stabilize the financial markets increase the need to borrow, thereby adding to the federal debt. Also, federal borrowing needs typically increase during an economic downturn—largely because tax revenues decline while expenditures increase for programs to assist those affected by the downturn. 
In addition, the American Recovery and Reinvestment Act, enacted on February 17, 2009, contains both decreases in revenues and increases in spending. Further, all of this takes place in the context of the longer-term fiscal outlook, which will present Treasury with continued financing challenges even after the return of financial stability and economic growth. Treasury’s primary debt management goal is to finance the government’s borrowing needs at the lowest cost over time. Issuing debt through regularly scheduled auctions lowers borrowing costs because investors and dealers value liquidity and certainty of supply. Treasury issues marketable securities that range in maturity from one month to 30 years and sells them at auction on a pre-announced schedule. The mix of securities that Treasury has outstanding changes regularly as new debt is issued. The mix of securities is important because it can have a significant influence on the federal government’s interest payments. Longer-term securities typically carry higher interest rates—or cost to the government—primarily due to concerns about future inflation. However, these longer-term securities offer the government the certainty of knowing what the Treasury’s payments will be over a longer period. At the end of February 2009, Treasury’s outstanding marketable securities stood at just under $6 trillion—an increase of $1.476 trillion since December 31, 2007. As shown in figure 1, a large portion of this debt increase was in the form of short-term cash management bills (CM bills). Between October 1, 2008, and February 28, 2009, Treasury issued $1.035 trillion in CM bills, of which $510 billion were outstanding at the end of February. Interest rates have decreased dramatically since the start of the financial crisis, particularly for short-term debt. Figure 2 below illustrates the size of that drop. 
The impact of this drop can be seen in lower borrowing costs—indeed, the budget shows net interest declining in fiscal year 2009. Although these relatively low interest rates have reduced Treasury’s borrowing costs, the increasing amount of short-term debt that needs to be rolled over does present challenges. As shown in figure 3, approximately $2.5 trillion—or 41 percent of total outstanding marketable securities—will mature in 2009 and will have to be refinanced. As Treasury borrows to meet its current needs, it must also plan for rolling over large amounts of debt in the short term. Treasury has said that it “recognizes the need to monitor short-term issuance versus longer dated issuance.” Market experts generally believe that Treasury needs to increase the average maturity of its debt portfolio, in part to lock in relatively low long-term rates and to ensure adequate borrowing capacity in the coming years. To support Congress’ oversight of the use of TARP funds, we have work underway looking at how Treasury has financed borrowing associated with the recent financial crisis and at additional ideas for debt management that might make sense going forward. Total borrowing will increase by trillions of dollars this year, not solely due to TARP and other activities aimed at stabilizing the financial system. Debt also grows in response to the economic slowdown as revenues fall and spending for some programs grows. Further, both the tax and spending provisions of the Recovery Act will also increase debt. All of this contributes to the borrowing challenge faced by the Treasury. As this Committee well knows, debt is also held in governmental accounts—such as the Social Security Trust Fund. This debt is included in the total debt subject to limit. The debt limit was increased by the Emergency Economic Stabilization Act of 2008 and the Recovery Act, but with only $1.2 trillion remaining under the limit, it will have to be raised again. 
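The rollover figures above are simple arithmetic; as a rough consistency check on the reported numbers (a sketch using the testimony's approximate figures, not an official calculation):

```python
# Approximate figures as reported in the testimony.
maturing_2009 = 2.5e12    # ~$2.5 trillion in securities maturing in 2009
rollover_share = 0.41     # 41 percent of total outstanding marketable securities

# Implied total outstanding marketable debt:
implied_total = maturing_2009 / rollover_share
# Roughly $6.1 trillion, consistent with the "just under $6 trillion"
# outstanding at the end of February 2009.
```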
The combination of slower growth and greater debt leads to increases in publicly held debt as a share of our economy: the President’s budget projects debt reaching 65 percent of gross domestic product in 2010 and remaining at that level for the rest of the decade. Today, Congress, the executive branch, and the American people are understandably focused on restoring financial stability and economic growth. At some point, however, the nation’s leaders will need to apply the same level of intensity to the serious long-term fiscal challenges facing the federal government. Mr. Chairman and Members of the Subcommittee, I appreciate the opportunity to discuss this critically important issue and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Thomas J. McCool on (202) 512-2642 or mccoolt@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on the Troubled Asset Relief Program (TARP), under which the Department of the Treasury (Treasury) has the authority to purchase and insure up to $700 billion in troubled assets held by financial institutions through its Office of Financial Stability (OFS). As Congress may know, Treasury was granted this authority in response to the financial crisis that has threatened the stability of the U.S. banking system and the solvency of numerous financial institutions. The Emergency Economic Stabilization Act (the act) that authorized TARP on October 3, 2008, requires GAO to report at least every 60 days on findings resulting from our oversight of the actions taken under the program. We are also responsible for auditing OFS's annual financial statements and for producing special reports on any issues that emerge from our oversight. To carry out these oversight responsibilities, we have assembled interdisciplinary teams with a wide range of technical skills, including financial market and public policy analysts, accountants, lawyers, and economists who represent combined resources from across GAO. In addition, we are building on our in-house technical expertise with targeted new hires and experts. The act also created additional oversight entities--the Congressional Oversight Panel (COP) and the Special Inspector General for TARP (SIGTARP)--that also have reporting responsibilities. We are coordinating our work with COP and SIGTARP and are meeting with officials from both entities to share information and coordinate our oversight efforts. These meetings help to ensure that we are collaborating as appropriate and not duplicating efforts. 
This testimony is based primarily on our January 30, 2009, report, the second under the act's mandate, which covers the actions taken as part of TARP through January 23, 2009, and follows up on the nine recommendations we made in our December 2, 2008, report. This statement also provides additional information on some recent program developments, including Treasury's new financial stability plan and, as you requested, provides some insights on our ongoing work on the implications of actions related to the financial crisis for federal debt management. Our oversight work under the act is ongoing, and our next report is due to be issued by March 31, 2009, as required. Specifically, this statement focuses on (1) the nature and purpose of activities that have been initiated under TARP; (2) the status of OFS's hiring efforts, use of contractors, and development of a system of internal control; (3) the implications of TARP and other events for federal debt management; and (4) preliminary indicators of TARP's performance. To do this work, we reviewed documents related to TARP, including contracts, agreements, guidance, and rules. We also met with OFS, contractors, federal agencies, and officials from all eight of the first large institutions to receive disbursements. We plan to continue to monitor the issues highlighted in our prior reports, as well as future and ongoing capital purchases, other more recent transactions undertaken as part of TARP (for example, guarantees on assets of Citigroup and Bank of America), and the status of other aspects of TARP.
USPS’s financial condition continued to decline over the past fiscal year, and its financial outlook is poor for fiscal year 2011 and the foreseeable future. Key USPS results for fiscal year 2010 included a $1.0 billion decline in total revenue, to $67.1 billion; a $3.7 billion increase in total expenses, to $75.6 billion, resulting in a record loss of about $8.5 billion; a $1.8 billion increase in outstanding debt, which left $1.2 billion of available borrowing authority and brought total outstanding debt due to the Treasury to $12 billion; and a $1.2 billion cash balance at the end of the fiscal year. USPS has recently released its budget for fiscal year 2011, projecting a $6.4 billion loss (see fig. 1)—one of the largest in USPS history—including the impact of a $5.5 billion payment due in 2011 to prefund retiree health benefits; a $3 billion increase in outstanding debt due to the Department of the Treasury (Treasury), thereby reaching its $15 billion statutory limit; and a $2.7 billion cash shortfall at the end of the fiscal year. USPS’s revenue drop in fiscal year 2010 was driven by continuing declines in total mail volume. In fiscal year 2010, mail volume decreased about 6 billion pieces from the previous fiscal year, to 171 billion pieces. This volume was about 20 percent below the peak of 213 billion pieces delivered during fiscal year 2006. Most of the volume declines were in profitable First-Class Mail—declines that were particularly significant because the average piece of First-Class Mail generates about three times the profit of the average piece of Standard Mail. USPS currently projects mail volume to increase by about 2 billion pieces in fiscal year 2011. In this fiscal year, First-Class Mail is expected to decrease by 3 billion pieces, but Standard Mail is expected to increase by 5 billion pieces. With these volume changes and expected small rate increases, USPS projects revenues to increase $0.6 billion in fiscal year 2011. 
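The fiscal year 2010 results and the projected fiscal year 2011 volume changes above hang together arithmetically; a quick check of the figures as reported (in billions):

```python
# Fiscal year 2010 results (dollars in billions, as reported).
revenue_fy2010 = 67.1
expenses_fy2010 = 75.6
net_fy2010 = revenue_fy2010 - expenses_fy2010
# About -8.5: the record ~$8.5 billion loss cited above.

# Projected fiscal year 2011 volume changes (billions of pieces).
first_class_change = -3   # First-Class Mail expected to decrease
standard_change = +5      # Standard Mail expected to increase
net_volume_change = first_class_change + standard_change
# +2 billion pieces, matching the projected total volume increase.
```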
Meanwhile, USPS’s expenses increased by $3.7 billion in fiscal year 2010 compared to fiscal year 2009 for several reasons. First, in fiscal year 2010, USPS made its statutorily required payment of $5.5 billion to prefund health benefits for its retirees, in contrast to fiscal year 2009, when Congress deferred all but $1.4 billion of USPS’s scheduled payment of $5.4 billion. Second, USPS’s workers’ compensation costs in fiscal year 2010 were $3.6 billion, up $1.3 billion from the previous fiscal year, primarily from the non-cash effect of changes in the discount rates used to estimate the liability. Third, results of USPS cost savings efforts in fiscal year 2010 were insufficient to offset rising costs in other areas. According to USPS, it achieved a total of close to $13 billion in cost savings from fiscal years 2006 through 2010 (see fig. 2), primarily by reducing work hours by 280 million and its workforce by 131,000 employees. Most savings resulted from attrition, reductions in overtime, and changes in postal operations. USPS reported saving $3 billion in fiscal year 2010, primarily because of a reduction of 75 million work hours—half the savings achieved in fiscal year 2009. Looking forward, USPS projects cost savings of $2 billion in fiscal year 2011, primarily from continued attrition and associated savings. As its core product—First-Class Mail—continues to decline, USPS must modernize and restructure to become more efficient, control costs, keep rates affordable, and meet changing customer needs. To do so, USPS will need to become much leaner and more flexible. Key challenges include the following: Mail volume and changing use of the mail: USPS projects mail volume to continue declining to about 150 billion pieces by fiscal year 2020—about 30 percent below its 2006 peak. Most of the declines are projected to be in profitable First-Class Mail. 
Use of the mail is changing as communications and payments continue to shift to electronic alternatives—a shift that is being facilitated by rapid adoption of broadband. These trends expose weaknesses in USPS’s business model, which has relied on volume growth to help cover costs. Postal revenues: USPS expects revenue to stagnate in the next decade as continued declines in mail volume are offset by rate increases. Rate increases are generally limited by the inflationary price cap on market-dominant products that generate close to 90 percent of USPS revenue. Compensation and benefit costs: Compensation and benefits, including retiree health benefits and workers’ compensation, totaled about $60 billion in fiscal year 2010, or close to 80 percent of USPS costs. USPS pays a higher share of employee health and life insurance premiums than other federal agencies. Difficulties achieving network realignment: Realigning USPS’s mail processing and retail facilities will be crucial for it to achieve sustainable cost reductions and productivity improvements, but limited progress has been made in rightsizing these networks to eliminate costly excess capacity. Although USPS is working to consolidate some mail processing operations, it has closed few large mail processing facilities since 2005. Similarly, its network of post offices and postal retail facilities has remained largely static despite expanded use of retail alternatives and population shifts. Capital investment: Continuing losses from operations have constrained funds for USPS capital investment. USPS’s purchases of capital property and equipment and building improvements have declined in recent years, from $1.8 billion in fiscal year 2009 to $1.4 billion in fiscal year 2010. The deferral of maintenance could impede modernization and efficiency gains from optimizing mail processing, retail, and delivery networks. Further, USPS has delayed buying new delivery vehicles for lack of capital resources. 
We have an ongoing review of USPS’s delivery fleet of about 185,000 vehicles, including about 140,000 long-life vehicles purchased in the late 1980s and early 1990s that are nearing the end of their 24-year expected operating time frame. USPS has estimated replacing its delivery fleet will cost about $5 billion. Lack of borrowing capacity: USPS expects to increase its outstanding debt to Treasury during fiscal year 2011 by $3 billion, thereby reaching its total statutory debt limit of $15 billion. Even with this debt increase, USPS projects a cash shortfall at the end of this fiscal year. Its cash outlook is uncertain, as indicated by recent experience. USPS reported in August 2010 that it “would likely experience a cash shortfall if legislation similar to that passed in September 2009 is not passed.” USPS ended fiscal year 2010 with cash of about $1.2 billion and remaining annual borrowing authority of an additional $1.2 billion, or slightly more than the funds needed for one biweekly payroll. USPS projects it will have insufficient cash at the end of fiscal year 2011 to meet all of its obligations. Large unfunded financial obligations and liabilities: USPS’s unfunded obligations and liabilities were roughly $100 billion at the end of fiscal year 2010. Looking forward, USPS will continue to be challenged by these financial obligations and liabilities, together with expected large financial losses and long-term declines in First-Class Mail volume. Proposed postal legislation, including S. 3831, provides a starting point for considering key issues where congressional decisions are needed to help USPS undertake needed reforms. This bill is based on legislative proposals USPS made this past spring. Resolving large USPS funding requirements for pension and retiree health benefits is important. 
It is equally important to USPS’s future to address constraints and legal restrictions, such as those related to closing facilities, so that USPS can take more aggressive action to reduce costs. Urgent action is needed, as some changes, such as rightsizing networks, will take time to implement and produce results. In addition, including incentives and oversight mechanisms would make an important contribution to assuring an appropriate balance between providing USPS with more flexibility and assuring sufficient transparency, oversight, and accountability. Congressional decisions may involve difficult trade-offs related to USPS’s role as a federal entity expected to provide universal mail delivery and ready access to postal retail service while being self-financing through businesslike operations. Future actions by USPS and other stakeholders are expected to be informed and guided by congressional decisions on public policy questions such as: Benefits: What changes, if any, should be made to USPS pension and retiree health benefit obligations and payment schedules? What would be the impact on the federal budget? Delivery: Should the long-standing requirement for Saturday delivery be dropped so USPS can implement its proposal to reduce delivery frequency to 5 days a week? What would be the specific effects on operations, costs, workforce mix, employees, service, competition, the value of mail, mail volume, and revenue? How would shifting to 5-day delivery affect customers, including business mailers and the public? Post office closings: Should USPS have greater flexibility to rightsize its retail networks and workforce, which may involve closing post offices and moving retail services to alternative commercial locations that are often open more days and longer hours than postal facilities? Or should USPS retain its retail facilities and provide new nonpostal products and services? 
Nonpostal products: Should USPS be allowed to offer new nonpostal products and services that compete with private-sector firms? If so, how should fair competition be assured? Would it need additional capital for such initiatives? If so, how would they be financed? Processes for change: What role should Congress, the PRC, USPS, employees, and customers, including business mailers and the public, have in decisions on postal policy issues? What incentives and oversight mechanisms are needed as part of congressional actions to assure an appropriate balance between providing USPS with more flexibility and assuring sufficient transparency, oversight, and accountability? In a report we issued last April, we discussed several options that Congress and USPS could consider, and we are currently conducting a congressionally requested review of USPS’s 5-day delivery proposal. In this testimony, we will highlight some options related to three areas that are also addressed by S. 3831—compensation and benefits, rightsizing networks and workforce, and expanding nonpostal activities. S. 3831 addresses key retiree health and pension benefit issues. Specifically, it requires OPM to recalculate USPS’s CSRS pension obligation in a way expected to make the federal government responsible for a greater share of that obligation. The bill also authorizes the USPS Board of Governors to transfer any part of a resulting pension surplus to the Postal Service Retiree Health Benefits Fund. The sponsor of S. 3831 has estimated that these legislative changes could result in an increase in the government’s pension obligations of approximately $50 billion. Such an increase could impact the federal budget deficit and require funding over time. 
USPS has said it cannot afford its required prefunding payments to the retiree health benefit fund on the basis of its significant volume and revenue declines, large losses, debt nearing its limit, and limited cost-cutting opportunities under its current authority. We have reported that Congress should consider providing financial relief to USPS, including modifying its retiree health benefit cost structure in a fiscally responsible manner. Several legislative proposals have been made to defer costs by revising statutory requirements, including extending and revising prefunding payments to the Retiree Health Benefits Fund, with smaller payment amounts in the short term followed by larger amounts later. Deferring some prefunding of these benefits would serve as short-term fiscal relief. However, deferrals also increase the risk that USPS will not be able to make future benefit payments as its core business declines. Therefore, it is important that USPS fund its retiree health benefit obligations—including prefunding these obligations—to the maximum extent that its finances permit. In addition to considering what is affordable and a fair balance of payments between current and future ratepayers, Congress would also have to address the impact of these proposals on the federal budget. Further, the Congressional Budget Office has raised concerns about how aggressive USPS’s cost-cutting measures would be if prefunding payments for retiree health care were reduced. Congress could revisit other aspects of the postal compensation and benefits framework. USPS is required to maintain compensation and benefits comparable to the private sector, a requirement that has been a source of disagreement between USPS and its unions in collective bargaining and binding arbitration. If USPS and its unions go to arbitration, there is no statutory requirement for arbitrators to consider USPS’s financial condition. We continue to favor such an arbitration requirement. 
The law also requires USPS’s fringe benefits to be at least as favorable as those in effect when the Postal Reorganization Act of 1970 was enacted. Career employees participate in federal pension and benefit programs, and USPS covers a higher proportion of its employees’ health care and life insurance premiums than most other federal agencies. USPS is also required by law to participate in the federal workers’ compensation program, and some benefits paid exceed those provided in the private sector. Furthermore, USPS employees in this program can choose not to retire when they become eligible to retire, and they often decide to remain on the more generous workers’ compensation rolls. Congressional action is needed to speed USPS’s progress in rightsizing its networks and workforce, and S. 3831 seeks to address these issues. Such progress is limited by both stakeholder resistance and statutory requirements. USPS has costly excess capacity and inadequate flexibility to quickly reduce costs in its processing and retail networks. USPS has faced formidable resistance to facility closures and consolidations because of concerns about possible effects on service, employees, and communities, particularly in small towns or rural areas. We have suggested that Congress consider establishing a panel similar to the military Base Realignment and Closure Commissions to facilitate action and progress. Such panels have successfully informed prior difficult restructuring decisions. The panel could consider options for USPS’s networks including the following:

Mail processing: Decisions to maintain or close facilities are best made in the context of a comprehensive, integrated approach for optimizing the processing network. Issues include how to inform Congress and the public, address resistance, and ensure employees will be treated fairly. Related issues include whether to relax current delivery standards to enable additional facility closures and associated savings.
Retail: USPS has retained most of its retail facilities in recent years despite the growing use of less costly alternatives to traditional post offices, such as self-service kiosks and stamp sales in grocery stores, drug stores, and over the Internet. USPS has called for statutory changes to facilitate modernizing its retail services. USPS has asked Congress to change the law so it can diversify into nonpostal areas to find new opportunities for revenue growth, and S. 3831 would authorize such action. This could involve USPS entering into new business areas or earning revenues from partners selling nonpostal products at USPS facilities. About 10 years ago, we reported that USPS incurred losses on early electronic commerce and other nonpostal initiatives, and its management of its electronic commerce initiatives was fragmented, with inconsistent implementation and incomplete financial information. Congress then restricted USPS from engaging in new nonpostal activities in the Postal Accountability and Enhancement Act of 2006. Allowing USPS to expand into new nonpostal activities would raise issues about the areas in which it should be allowed to compete with the private sector, how to assure fair competition, how to mitigate risks associated with entering new lines of business, and how to finance such efforts. Related issues could include whether USPS’s mission and role as a government entity with a monopoly should be changed, what transparency and accountability would apply, whether USPS would be subject to the same regulatory entities and regulations as its competitors, and whether losses would be borne by postal ratepayers or taxpayers. A senior USPS official told us that USPS is studying various possibilities for introducing new products and services. A continued issue is whether USPS would make money if it was allowed to compete in new nonpostal areas. 
USPS has reported that if it could enter such areas, such as banking or sales of consumer goods, its opportunities would be limited by its high cost structure and the relatively light customer traffic of post offices compared with commercial retailers. (There are 600 weekly counter customers at the average post office, compared to 20,000 at the average major supermarket, according to USPS.) USPS has said that the possibility of building a sizable presence in logistics, banking, integrated marketing, and document management was currently not viable because of its net losses, high wage and benefit costs, and limited access to cash to support necessary investment. USPS concluded that building a sizable business in any of these areas would require “time, resources, new capabilities (often with the support of acquisitions or partnerships) and profound alterations to the postal business model.” In summary, the need for postal reform continues as business and consumer use of the mail continues to evolve. Congress and USPS urgently need to reach agreement on a package of actions to restore USPS’s financial viability and enable it to begin making necessary changes. Mr. Chairman, that concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have. For further information about this statement, please contact Phillip Herr at (202) 512-2834 or herrp@gao.gov. Individuals who made key contributions to this statement include Joseph Applebaum, Chief Actuary; Susan Ragland, Director, Financial Management and Assurance; Amy Abramowitz; Teresa Anderson; Joshua Bartzen; Kenneth John; Hannah Laufe; SaraAnn Moessbauer; Robert Owens; Crystal Wesco; and Jarrod West. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Postal Service's (USPS) financial condition and outlook deteriorated sharply during fiscal years 2007 through 2009. USPS actions to cut costs and increase revenues were insufficient to offset declines in mail volume and revenues. Mail volume declined from 213 billion pieces in fiscal year 2006, to 171 billion pieces in fiscal year 2010--or about 20 percent. Volume declines resulted from the recession and changes in the use of mail as transactions and messages continued to shift to electronic alternatives. In this environment, USPS initiatives to increase revenues had limited results. USPS expects mail volume to decline further to about 150 billion pieces by 2020. This trend exposes weaknesses in USPS's business model, which has relied on growth in mail volume to help cover costs. GAO and others have reported on options for improving USPS's financial condition, including GAO's April 2010 report on USPS's business model (GAO-10-455). Recently, legislation has been introduced that addresses USPS's finances and the need for flexibility to help modernize operations. This testimony discusses (1) updated information on USPS's financial condition and outlook, (2) the need to modernize and restructure USPS, and (3) key issues that need to be addressed by postal legislation. It is based primarily on GAO's past and ongoing work. In comments on our statement, USPS generally agreed with its accuracy and provided technical comments that were incorporated as appropriate. USPS's financial condition continued to decline in fiscal year 2010 and its financial outlook is poor for fiscal year 2011 and the foreseeable future. 
Key results for fiscal year 2010 included total revenue of $67.1 billion and total expenses of $75.6 billion, resulting in (1) a record loss of $8.5 billion--up $4.7 billion from fiscal year 2009, (2) a $1.8 billion increase in outstanding debt to the Treasury, thus making the total outstanding debt $12 billion, and (3) a $1.2 billion cash balance at the end of the fiscal year. USPS's budget for fiscal year 2011 projects (1) a $6.4 billion loss, (2) a $3 billion increase in debt to the $15 billion statutory limit, and (3) an end-of-year cash shortfall of $2.7 billion. USPS has reported achieving close to $13 billion in cost savings in the past 5 fiscal years. However, as its most profitable core product, First-Class Mail, continues to decline, USPS must modernize and restructure to become more efficient, control costs, keep rates affordable, and meet changing customer needs. To do so, USPS needs to become much leaner and more flexible. Key challenges include: changing use of the mail; compensation and benefit costs that are close to 80 percent of total costs; difficulties realigning networks to remove costly excess capacity and improve efficiency; constrained capital investment, which has declined to one of the lowest levels in two decades and led to delays in buying new vehicles; lack of borrowing capacity when USPS reaches its statutory debt limit; and large unfunded financial obligations and liabilities of roughly $100 billion at the end of fiscal year 2010. Proposed postal legislation, including S. 3831, provides a starting point for addressing key issues facing USPS and facilitating changes, such as rightsizing networks, that will take time to implement and produce results. Also, decisions on postal issues may involve trade-offs related to USPS's role as a federal entity expected to provide universal postal service while being self-financing through businesslike operations. 
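The fiscal year 2010 figures above are internally consistent and can be checked with simple arithmetic. A minimal sketch of those checks (all amounts are taken from the testimony; the variable names are our own):

```python
# Figures reported in the testimony (dollars in billions, mail in billions of pieces).
revenue_fy2010 = 67.1
expenses_fy2010 = 75.6
loss_fy2010 = expenses_fy2010 - revenue_fy2010   # the record $8.5 billion loss

debt_increase = 1.8
debt_fy2010 = 12.0                               # total outstanding debt to the Treasury
debt_fy2009 = debt_fy2010 - debt_increase        # implied prior-year debt of $10.2 billion

volume_fy2006 = 213
volume_fy2010 = 171
volume_decline_pct = (volume_fy2006 - volume_fy2010) / volume_fy2006 * 100

print(round(loss_fy2010, 1), round(debt_fy2009, 1), round(volume_decline_pct, 1))
```

The computed volume decline of roughly 19.7 percent matches the testimony's characterization of "about 20 percent."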
Three key areas addressed by the bill include compensation and benefits; rightsizing USPS networks and workforce; and whether to allow USPS to expand its nonpostal activities. For example, resolving large USPS funding requirements for retiree health benefits is important, while continuing to prefund retiree health benefits to the extent USPS's finances permit. It is equally important to address constraints and legal restrictions, such as those related to closing facilities, so that USPS can take more aggressive action to reduce costs. Allowing USPS to expand into nonpostal activities raises issues of how to mitigate risks associated with new lines of business, assure fair competition with the private sector, and finance such efforts. Congress and USPS urgently need to take action to restore USPS's financial viability as business and consumer use of the mail continues to evolve.
The final medical privacy regulation requires that most providers obtain patient consent to use or disclose health information before engaging in treatment, payment, or health care operations. As defined in the regulation, health care operations include a variety of activities such as undertaking quality assessments and improvement initiatives, training future health care professionals, conducting medical reviews, and case management and care coordination programs. The consent form must alert patients to the provider’s notice of privacy practices (described in a separate document) and notify them of their right to request restrictions on the use and disclosure of their information for routine health care purposes. Providers are not required to treat patients who refuse to sign a consent form, nor are they required to agree to requested restrictions. The consent provision applies to all covered providers that have a direct treatment relationship with patients. The regulation also specifies several circumstances where such prior patient consent is not required. The privacy regulation does not require health plans to obtain written patient consent. This approach to patient consent for information disclosures differs from that in HHS’ proposed privacy regulation, issued for public comment November 3, 1999. The proposed regulation would have permitted providers to use and disclose information for treatment, payment, and health care operations without written consent. At the time, HHS stated that the existing consent process had not adequately informed patients of how their medical records could be used. Comments HHS received on this provision were mixed. Some groups approved of this approach, saying it would ensure that covered entities could share information to provide effective clinical care and operate efficiently, while not creating administrative requirements that would add little to individual privacy. 
However, others wrote that individuals should be able to control to whom, and under what circumstances, their individually identifiable health information would be disclosed, even for routine treatment, payment, or health care operations. The extent to which the privacy regulation’s consent requirement will be a departure from business as usual varies by type of provider. Under current practices, physicians and hospitals generally obtain consent to use patient data for processing insurance claims, but they obtain consent substantially less often for treatment or health care operations. Pharmacists, however, typically do not have consent procedures in place for any of the routine purposes included in the regulation. Specifically:

Most, but not all, physicians get signed written consent to use patient data for health insurance payment. Exceptions to this practice include emergency situations and patients who choose to pay for their treatment “out of pocket” to avoid sharing sensitive information with an insurer. However, physicians do not typically seek approval to use patient data to carry out treatment or health care operations.

Nearly all hospitals routinely obtain written consent at the time of admission, at least for release of information to insurance companies for payment purposes. A 1998 study of large hospitals found that 97 percent of patient consent forms sought release of information for payment, 50 percent addressed disclosure of records to other providers, and 45 percent requested consent for utilization review, peer review, quality assurance, or prospective review—the types of health care management activities considered health care operations in the federal privacy regulation.

Pharmacies do not routinely obtain patient consent related to treatment (i.e., before filling a prescription), payment, or health care operations.
However, industry representatives told us that pharmacies conducting disease management programs (specialized efforts to ensure appropriate pharmaceutical use by patients with certain chronic conditions) typically seek consent to share information with physicians about the patients’ condition, medical regimen, and progress. The new consent requirement makes several important changes to current practices that have implications for patients and providers. Patients will be made aware that their personal health information may be used or disclosed for a broad range of purposes including health care operations. Other provisions of the privacy regulation grant patients additional protections, including the right to access their records, to request that their records be amended, to obtain a history of disclosures, and to request restrictions on how their information is used. Providers directly treating patients will have a legal obligation to obtain prior written consent and to use a form that meets specific content requirements. Supporters of the consent requirement argue that the provision gives patients an opportunity to be actively involved in decisions about the use of their data. Yet, many groups recognize that signing a provider’s consent form does not, per se, better inform patients of how their information will be used or disclosed. In addition, most provider organizations we interviewed told us that the privacy regulation’s consent requirement will be a challenge to implement and may impede some health care operations. The American Medical Association (AMA), the Bazelon Center for Mental Health Law, and the Health Privacy Project (HPP) indicated that the consent process offers important benefits to patients. These groups view the process of signing a consent form as a critical tool in focusing patient attention on how personal health information is being used.
They assert that only providing patients with a notice of privacy practices is not sufficient because most patients are not likely to understand its importance, much less read it. The patient advocacy groups told us that the act of signing the consent can help make patients aware of their ability to affect how their information is used. This heightened awareness, in turn, may make patients more likely to read the notice of privacy practices or to discuss privacy issues with their health care provider. HPP cited the process of signing consent as offering an “initial moment” in which patients have an opportunity to raise questions about privacy concerns and learn more about the options available to them. This opportunity may be especially valuable to patients seeking mental health and other sensitive health care services. In contrast, many groups we interviewed question the value of the consent form for patients. For example, the Medical Group Management Association (MGMA) and the American Hospital Association (AHA) assert that the process of signing a consent form may be perfunctory, at best, and confusing, at worst. To some extent, patient advocacy groups we spoke with agree. They say that patients will be under pressure to sign the form without reading the notice, as providers can condition treatment upon obtaining consent. They contend that many patients may not find the consent process meaningful. They maintain that nevertheless it should be required for the benefit it offers patients who may be particularly interested in having a say about how their health information will be used. Health plan and provider organizations we interviewed told us that the consent requirement poses implementation difficulties for patients and providers both during the regulation’s initial implementation and beyond. The extent of these challenges and their potential implications vary by type of provider. 
In general, these organizations do not favor written consents for routine uses of patient information, although they support the regulation’s requirement to provide patients with privacy notices. The consent requirement would oblige pharmacists to change their current practices. Under the regulation, a patient must sign a consent form before a pharmacist can begin filling the prescription. According to the American Pharmaceutical Association and the National Association of Chain Drug Stores, this requirement would result in delays and inconvenience for patients when they use a pharmacy for the first time. Also, pharmacies would not be able to use patient information currently in their systems to refill prescriptions or send out refill reminders before receiving patient consent to do so. In addition, patients who spent time in different parts of the country and were accustomed to transferring their prescriptions to out-of-state pharmacies would have to provide consent to one or more pharmacies before their prescriptions could be filled. Pharmacy and other organizations have suggested that the privacy regulation should recognize a physician-signed prescription as indicative of patient consent or that pharmacies could be considered indirect providers and thus not subject to the consent requirement. Hospital organizations also raised concern about disruption of current practice and some loss of efficiency. AHA and Allina Health System representatives stated that the consent requirement could impede the ability of hospitals to collect patient information prior to admission, thus creating administrative delays for hospitals and inconvenience for some patients. In advance of nonemergency admissions, hospitals often gather personal data needed for scheduling patient time in operating rooms, surgical staff assignments, and other hospital resources.
If the regulation is interpreted to include such activities as part of treatment or health care operations, hospitals would be required to get the patient’s signed consent before setting the preadmissions process in motion. Either a form would have to be mailed or faxed to the patient and sent back, or the patient would have to travel to the hospital to sign it. Physician and hospital groups expressed concern that the requirement would hinder their ability to conduct health care management reviews using archived records. For example, AMA and AHA told us that the regulation will not permit them to use much of the patient data gathered under previous consent forms. While the regulation has a transition provision that allows providers to rely on consents acquired before the regulation takes effect, the continuing validity of those preexisting consents would be limited to the purposes specified on the consent form. In most cases, the purposes specified were either treatment or billing. This means that providers would not be able to draw on those data for other purposes, including common health care management functions, such as provider performance evaluations, outcome analyses, and other types of quality assessments. Moreover, they said that in many cases it might not be feasible to retroactively obtain consent from former patients. Some have suggested revising the regulation to allow providers to use, without consent, all health information created prior to the regulation’s effective date. All of the organizations representing providers and health plans anticipate an additional administrative burden associated with implementing the new consent procedures, but the magnitude of the potential burden is uncertain. 
For example, if the use of new forms elicits more questions from patients about medical records privacy, as the provision’s supporters expect will happen, providers will have to devote more staff time to explaining consent and discussing their information policies. Similarly, health plan and provider advocates contend that focusing patients’ attention on their right to request restrictions on how their information is used could result in many more patients seeking to exercise that right. This, some believe, would require increased staff time for considering, documenting, and tracking restrictions. The privacy regulation expands the scope of the consent process to include the use and disclosure of personal health information for a wide range of purposes. This may help some patients become aware of how their medical information may be used. However, in general, provider and health plan representatives believe that the consent requirement’s benefits are outweighed by its shortcomings, including delays in filling prescriptions, impediments to hospital preadmission procedures, and difficulty in using archived patient information. Regardless of the presence of the consent requirement, providers are obligated under the regulation to protect the confidentiality of patient information. Moreover, with or without the consent requirement, patients’ rights established by the privacy regulation—to see and amend their records, to learn of all authorized uses of their information, and to request restrictions on disclosures—remain unchanged. HHS provided written technical comments on a draft of this report. In them, HHS remarked on the consent requirement’s applicability to archived patient medical records. Agency officials explained that a consent for either treatment, payment, or health care operations acquired before the regulation’s compliance date would be valid for continued use or disclosure of those data for all three of these purposes after that date. 
Under this interpretation, for example, prior consents to disclose patient information for insurance claims would permit uses for the full range of health care operations as well, unless specifically excluded in the consent that the patient signed. In our view, a better understanding of the implications of this provision may emerge from any revisions to the final regulation. Referring to material in appendix I, the agency expressed concern that we overgeneralized current state consent laws, which have complex requirements and vary significantly from one to another. HHS pointed out that some state laws require written consent in some circumstances that would be considered treatment, payment, or health care operations. We recognize that state laws are complex and vary widely in the type of health care information that is protected and the stringency of those protections. While it is difficult to generalize about state laws, we found that the statutes in the 10 states we examined were fairly consistent in not requiring written consent for the full range of uses and disclosures of patient information for treatment, payment, and health care operations. The agency provided other technical comments that we incorporated where appropriate. We are sending copies of this report to the Honorable Tommy G. Thompson, Secretary of HHS, and others who are interested. We will also make copies available to others on request. If you or your staff have any questions, please call me at (312) 220-7600 or Rosamond Katz, Assistant Director, at (202) 512-7148. Other key contributors to this report were Jennifer Grover, Joel Hamilton, Eric Peterson, and Craig Winslow. To examine how state privacy laws address the issue of patient consent to use health information, we reviewed certain laws in 10 states (Hawaii, Maine, Maryland, Minnesota, Montana, Rhode Island, Texas, Virginia, Washington, and Wyoming).
We found that none of these state privacy statutes include a consent requirement as broad as that found in the privacy regulation. Although they generally prohibit using or disclosing protected health information without the patient’s permission, they include significant exceptions not present in the federal regulation. Essentially, none of the state statutes we reviewed requires consent for the full range of uses and disclosures of patient information for treatment and health care operations. The Minnesota and Wyoming statutes require consent to use patient health information for payment purposes. Two states recently attempted to enhance patient control over their personal health information. In 1996, Minnesota enacted a law that placed stringent consent requirements on the use of patient data for research. It stipulated that patient records created since January 1, 1997, not be used for research without the patient’s written authorization. Because such authorization was not obtained at the start of treatment, researchers had to retroactively seek permission. They soon found that many patients did not respond to requests for such authorization, either to approve or to reject the use of their data. The law was amended to permit the use of records in cases where the patient had not responded to two requests for authorization mailed to the patient’s last known address. At one major research institution in Minnesota, the Mayo Clinic, that change decreased the percentage of patient records that the patient consent requirement made unavailable for studies from 20.7 percent to 3.2 percent. In late 1998, Maine enacted a comprehensive law requiring specific patient authorization for many types of disclosures and uses of health information. The law took effect January 1, 1999, but was soon suspended by the state legislature in response to numerous complaints from the public. 
Particularly problematic was that “hospital directory” information could not be released without the patient’s specific written authorization. Therefore, until routine paperwork was completed, hospitals could not disclose patients’ room or telephone numbers when friends, family, or clergy tried to contact or visit them. Based on this experience, the Maine legislature substantially modified the law, which became effective on February 1, 2000. Among other changes, the revised law allows a hospital to list current patients in a publicly available directory unless a patient specifically requests to be excluded.
The Department of Health and Human Services issued a final regulation in December 2000 that established rights for patients with respect to the use of their medical records. The regulation requires that most providers obtain patient consent to use or disclose health information before engaging in treatment, payment, or health care operations. The privacy regulation's consent requirement will be more of a departure from current practice for some providers than for others. Most health care providers, with the exception of pharmacists, obtain some type of consent from patients to release information to insurers for payment purposes. The new requirement obligates most providers to obtain consent before they can use and disclose patient information. It also broadens the scope of consent to include treatment and a range of health care management activities. Supporters of the requirement believe that the process of signing a consent form provides an opportunity to inform and focus patients on their privacy rights. Others, however, are skeptical and assert that most patients will simply sign the form with little thought. In addition, provider and other organizations interviewed are concerned that the new consent requirement poses implementation difficulties. They contend that it could cause delays in filling prescriptions for patients who do not have written consents on file with their pharmacies, impede the ability of hospitals to obtain patient information prior to admission, hamper efforts to assess health care quality by precluding the use of patient records from years past, and increase administrative burdens on providers.
The tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been paid voluntarily and on time and what was actually paid for a specific year. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return altogether or on time. IRS’s tax gap estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers. As shown in table 1, underreporting of tax liabilities accounted for most of the tax gap estimate for tax year 2001. IRS has estimated the tax gap on multiple occasions, beginning in 1979, relying on its Taxpayer Compliance Measurement Program (TCMP). IRS did not implement any TCMP studies after 1988 because of concerns about costs and burdens on taxpayers. Recognizing the need for current compliance data, in 2002 IRS implemented a new compliance study called the National Research Program (NRP) to produce such data for tax year 2001 while minimizing taxpayer burden. IRS has concerns with the certainty of the tax gap estimate for tax year 2001 in part because some areas of the estimate rely on old data, IRS has no estimates for other areas of the tax gap, and it is inherently difficult to measure some types of noncompliance. IRS used data from NRP to estimate individual income tax underreporting and the portion of employment tax underreporting attributed to self-employed individuals. The underpayment segment of the tax gap is not an estimate, but rather represents the tax amounts that taxpayers reported on time but did not pay on time. Other areas of the estimate, such as corporate income tax and employer-withheld employment tax underreporting, rely on decades-old data. 
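The passage above defines the gross tax gap as the sum of estimates for the three noncompliance types. A minimal sketch of that aggregation (the dollar amounts are illustrative placeholders only, since table 1 is not reproduced in this excerpt):

```python
# Illustrative component amounts in billions of dollars (placeholders,
# not the actual table 1 values).
tax_gap_components = {
    "underreporting": 285.0,  # tax liabilities understated on filed returns
    "underpayment": 33.0,     # taxes reported on time but not paid on time
    "nonfiling": 27.0,        # required returns not filed, or not filed on time
}

gross_tax_gap = sum(tax_gap_components.values())
largest = max(tax_gap_components, key=tax_gap_components.get)
share_of_total = tax_gap_components[largest] / gross_tax_gap

print(gross_tax_gap, largest, round(share_of_total, 2))
```

Consistent with the text, underreporting dominates the total in this sketch; the actual shares depend on the table 1 values.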
Also, IRS has no estimates for corporate income, employment, and excise tax nonfiling or for excise tax underreporting. In addition, it is inherently difficult for IRS to observe and measure some types of underreporting or nonfiling, such as tracking cash payments that businesses make to their employees, as businesses and employees may not report these payments to IRS in order to avoid paying employment and income taxes, respectively. IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. IRS seeks to improve voluntary compliance through efforts such as education and outreach programs and by attempting to simplify the tax process, such as by revising forms and publications to make them electronically accessible and more easily understood by diverse taxpayer communities. IRS uses its enforcement authority to ensure that taxpayers are reporting and paying the proper amounts of taxes through efforts such as examining tax returns and matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns it receives from third parties. IRS reports that it collected over $47 billion in 2005 from noncompliant taxpayers it identified through its various enforcement programs. In spite of IRS’s efforts to improve taxpayer compliance, the rate at which taxpayers pay their taxes voluntarily and on time has tended to range from around 81 percent to around 84 percent over the past three decades. Any significant reduction of the tax gap would likely depend on an improvement in the level of taxpayer compliance. Tax law simplification and reform both have the potential to reduce the tax gap by billions of dollars. The extent to which the tax gap would be reduced depends on which parts of the tax system would be simplified and in what manner as well as how any reform of the tax system is designed and implemented. 
Neither approach, however, will eliminate the gap. Further, changes in the tax laws and system to improve tax compliance could have unintended effects on other tax system objectives, such as those involving economic behavior or equity. Simplification has the potential to reduce the tax gap for at least three broad reasons. First, it could help taxpayers comply voluntarily with more certainty, reducing inadvertent errors by those who want to comply but are confused because of complexity. Second, it may limit opportunities for tax evasion, reducing intentional noncompliance by taxpayers who can misuse complex code provisions to hide their noncompliance or to achieve ends through tax shelters. Third, tax code complexity may erode taxpayers’ willingness to comply voluntarily if they cannot understand its provisions or if they see others taking advantage of complexity to intentionally underreport their taxes. Simplification could take multiple forms. One form would be to retain existing laws but make them simpler. For example, in our July 2005 report on postsecondary tax preferences, we noted that the definition of a qualifying postsecondary education expense differed somewhat among some tax code provisions; for instance, some include the cost of purchasing books while others do not. Making definitions consistent across code provisions may reduce taxpayer errors. Although we cannot say the errors were due to these differences in definitions, in a limited study of paid preparer services to taxpayers, we found some preparers claiming unallowable expenses for books. Further, the Joint Committee on Taxation suggested that such dissimilar definitions may increase the likelihood of taxpayer errors and increase taxpayer frustration. Another tax code provision in which complexity may have contributed to the individual tax gap involves the earned income tax credit, for which IRS estimated a tax loss of up to about $10 billion for tax year 1999. 
Although some of this noncompliance may be intentional, we and the National Taxpayer Advocate have previously reported that confusion over the complex rules governing eligibility for claiming the credit could cause taxpayers to fail to comply inadvertently. Although retaining but simplifying tax code provisions may help reduce the tax gap, doing so may not be easy, may conflict with other policy decisions, and may have unintended consequences. The simplification of the definition of a qualifying child across various code sections is an example. We suggested in the early 1990s that standardizing the definition of a qualifying child could reduce taxpayer errors and reduce their burden. A change was not made until 2004. However, some have suggested that the change has created some unintended consequences, such as increasing some taxpayers’ ability to reduce their taxes in ways Congress may not have intended. Another form of simplification could be to eliminate or consolidate tax expenditures. Among the many causes of tax code complexity is the growing number of preferential provisions in the code, defined in statute as tax expenditures, such as tax exemptions, exclusions, deductions, credits, and deferrals. The number of these tax expenditures has more than doubled from 1974 through 2005. Tax expenditures can contribute to the tax gap if taxpayers claim them improperly. For example, IRS’s recent tax gap estimate includes a $32 billion loss in individual income taxes for tax year 2001 because of noncompliance with these provisions. Simplifying these provisions of the tax code would not likely yield $32 billion in revenue because even simplified provisions likely would have some associated noncompliance. However, the estimate suggests that simplification could have important tax gap consequences, particularly if simplification also accounted for any noncompliance that arises because of complexity on the income side of the tax gap for individuals. 
However, these credits and deductions serve purposes that Congress has judged to be important to advance federal goals. Eliminating or consolidating them likely would be complicated and would create winners and losers. Elimination also could conflict with other objectives such as encouraging certain economic activity or improving equity. Similar trade-offs exist with possible fundamental tax reforms that would move away from an income tax system to some other system, such as a consumption tax, national sales tax, or value-added tax. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few tax preferences or complex tax code provisions and if taxable transactions are transparent. However, these characteristics are difficult to achieve in any system, and experience suggests that simply adopting a fundamentally different tax system may not by itself eliminate any tax gap. Any tax system could be subject to noncompliance, and its design and operation, including the types of tools made available to tax administrators, affect the size of any corresponding tax gap. Further, the motivating forces behind tax reform likely include factors beyond tax compliance, such as economic effectiveness, equity, and burden, which could in some cases carry greater weight in designing an alternative tax system than ensuring the highest levels of compliance. Changing the tax laws to provide IRS with additional enforcement tools, such as expanded tax withholding and information reporting, could also reduce the tax gap by many billions of dollars, particularly with regard to underreporting—the largest segment of the tax gap. Tax withholding promotes compliance because employers or other parties subtract some or all of the taxes owed from a taxpayer’s income and remit them to IRS. Information reporting tends to lead to high levels of compliance because the income taxpayers earn is transparent both to them and to IRS. 
In both cases, high levels of compliance tend to be maintained over time. Also, because withholding and information reporting let IRS better identify noncompliant taxpayers and prioritize contacting them by the potential for additional revenue, these tools can enable IRS to better allocate its resources. However, designing new withholding or information reporting requirements to address underreporting can be challenging given that many types of income are already subject to at least some form of withholding or information reporting, there are varied forms of underreporting, and the requirements could impose costs and burdens on third parties. Taxpayers tend to report income subject to tax withholding or information reporting with high levels of compliance, as shown in figure 1, because the income is transparent to the taxpayers as well as to IRS. Additionally, once withholding or information reporting requirements are in place for particular types of income, compliance tends to remain high over time. For example, for wages and salaries, which are subject to tax withholding and substantial information reporting, the percentage of income that taxpayers misreport has consistently been measured at around 1 percent over time. In the past, we have identified a few specific areas where additional withholding or information reporting requirements could serve to improve compliance:

Requiring more data on information returns dealing with capital gains income from securities sales. Recently, we reported that an estimated 36 percent of taxpayers misreported their capital gains or losses from the sale of securities, such as corporate stocks and mutual funds. Further, around half of the taxpayers who misreported did so because they failed to report the securities’ cost, or basis, sometimes because they did not know the securities’ basis or failed to take certain events into account that required them to adjust the basis of their securities. 
When taxpayers sell securities like stock and mutual funds through brokers, the brokers are required to report information on the sale, including the amount of gross proceeds the taxpayer received; however, brokers are not required to report basis information for the sale of these securities. We found that requiring brokers to report basis information for securities sales could improve taxpayers’ compliance in reporting their securities gains and losses and help IRS identify noncompliant taxpayers. However, we were unable to estimate the extent to which a basis reporting requirement would reduce the capital gains tax gap because of limitations with the compliance data on capital gains and because neither IRS nor we know the portion of the capital gains tax gap attributed to securities sales.

Requiring tax withholding and more or better information return reporting on payments made to independent contractors. Past IRS data have shown that independent contractors report 97 percent of the income that appears on information returns, while contractors that do not receive these returns report only 83 percent of income. We have also identified other options for improving information reporting for independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to report separately on their tax returns the total amount of payments to independent contractors. IRS’s Taxpayer Advocate Service recently recommended allowing independent contractors to enter into voluntary withholding agreements.

Requiring information return reporting on payments made to corporations. Unlike payments made to sole proprietors, payments made to corporations for services are generally not required to be reported on information returns. IRS and GAO have contended that the lack of such a requirement leads to lower levels of compliance for small corporations. 
Although Congress has required federal agencies to provide information returns on payments made to contractors since 1997, payments made by others to corporations are generally not covered by information returns. The Taxpayer Advocate Service has recommended requiring information reporting on payments made to corporations, and the administration’s fiscal year 2007 budget has proposed requiring additional information reporting on certain payments for goods and services by federal, state, and local governments. In addition to improving taxpayer compliance, information reporting can help IRS to better allocate its resources to the extent that it helps IRS better identify noncompliant taxpayers and the potential for additional revenue that could be obtained by contacting these taxpayers. For example, IRS officials told us that receiving information on basis for taxpayers’ securities sales would allow IRS to determine more precisely taxpayers’ income from securities sales through its document matching programs and would allow it to identify which taxpayers who misreported securities income have the greatest potential for additional tax assessments. Similarly, IRS could use basis information to improve both aspects of its examination program—examinations of tax returns through correspondence and examinations of tax returns face-to-face with the taxpayer. Currently, capital gains issues are too complex and time consuming for IRS to examine through correspondence. However, IRS officials told us that receiving cost basis information might enable IRS to examine noncompliant taxpayers through correspondence because it could productively select tax returns to examine. Also, having cost basis information could help IRS identify the best cases to examine face-to-face, making the examinations more productive while simultaneously reducing the burden imposed on compliant taxpayers who otherwise would be selected for examination. 
As a result of all these benefits, basis reporting would allow IRS to better allocate its resources that focus on securities misreporting across its enforcement programs. Although withholding and information reporting lead to high levels of compliance, designing new requirements to address underreporting could be challenging given that many types of income, including wages and salaries, dividend and interest income, and income from pensions and Social Security, are already subject to withholding or substantial information reporting. Also, establishing new withholding or information reporting requirements is challenging for certain other types of income where underreporting is extensive. Challenges exist where taxable income is difficult to determine because of complex tax laws or complex transactions, or where no practical and reliable third-party source exists to provide the information. For example, with regard to reporting securities basis information, we reported that it would be difficult for brokers to report information for some types of transactions because of complex tax laws and that representatives from the securities industry told us that a set of rules would need to be developed to establish clearly what types of transactions would be subject to any reporting requirement. Likewise, a persistent and large part of the tax gap relates to nonfarm sole proprietor and informal supplier income. As shown in figure 1, this income is not subject to information reporting, and these taxpayers misreported about half of the income they earned for tax year 2001. Although establishing withholding or information reporting requirements for these forms of income would likely improve taxpayers’ compliance, practical and effective information reporting mechanisms are difficult to identify. 
For example, informal suppliers by definition receive income in an informal manner through services they provide to a variety of individual citizens or small businesses. Whereas businesses may have the capacity to perform withholding and information reporting functions for their employees, it may be challenging to extend withholding or information reporting responsibilities to the individual citizens that receive services, who may not have the resources or knowledge to comply with such requirements. Consequently, innovative approaches likely will be needed if tools like withholding and information returns are to be extended to cover more sources of the tax gap. Finally, implementing tax withholding and information reporting requirements generally imposes costs and burdens on the businesses that must implement them, and, in some cases, on taxpayers. For example, expanding information reporting on securities sales to include basis information will impose costs on the brokers that would track and report the information. Further, trying to close the entire tax gap with these enforcement tools could entail more intrusive recordkeeping or reporting than the public is willing to accept. Considering these costs and burdens should be part of any evaluation of additional withholding or information reporting requirements. Although I have focused on information reporting and tax withholding, I want to mention one other enforcement tool that can potentially deter noncompliance, which is the use of penalties for filing inaccurate or late tax and information returns. Congress has placed a number of civil penalty provisions in the tax code. However, as with civil penalties related to other federal agencies, inflation may have weakened the deterrent effect of IRS penalties. 
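How inflation erodes a fixed-dollar penalty can be sketched with a simple deflation calculation. The CPI index values below are approximate annual averages and are my assumption, not figures from this testimony; the result is close to the TIGTA figure the testimony cites.

```python
def real_value(nominal, cpi_base_year, cpi_target_year):
    """Deflate a nominal dollar amount to base-year purchasing power."""
    return nominal * cpi_base_year / cpi_target_year

# Approximate CPI-U annual averages (1982-84 = 100); illustrative only.
CPI_1978 = 65.2
CPI_2004 = 188.9

# A $50 penalty, fixed in nominal terms since 1978, expressed in 1978 dollars:
penalty_in_1978_dollars = real_value(50, CPI_1978, CPI_2004)
print(round(penalty_in_1978_dollars, 2))  # roughly $17 -- near TIGTA's $17.22
```

The small differences from the official figure reflect which CPI series and averaging period are used; the point is only that the real deterrent value has shrunk to about a third of its 1978 level.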
For example, the Treasury Inspector General for Tax Administration has noted that the $50 per partner per month penalty for a late-filed partnership tax return, established by Congress in 1978, would equate to $17.22 in 2004 dollars. In its fiscal year 2007 budget, the administration has proposed expanding penalty provisions applicable to paid tax return preparers to include non-income tax returns and related documents. In addition, Congress recently increased certain penalties related to tax shelters and other tax evasion techniques. Given Congress’s recent judgment that some tax penalties were too low and concerns that inflation may have weakened the effectiveness of the civil penalty provisions in the tax code, additional increases may need to be considered to ensure that all penalties are of sufficient magnitude to deter tax noncompliance. Devoting more resources to enforcement has the potential to help reduce the tax gap by billions of dollars in that IRS would be able to expand its enforcement efforts to reach a greater number of potentially noncompliant taxpayers. However, determining the appropriate level of enforcement resources to provide IRS requires taking into account many factors, such as how effectively and efficiently IRS is currently using its resources, how to strike the proper balance between IRS’s taxpayer service and enforcement activities, and competing federal funding priorities. If Congress were to provide IRS more enforcement resources, the amount of the tax gap that could be reduced depends in part on the size of any increase in IRS’s budget, how IRS would manage any additional resources, and the indirect increase in taxpayers’ voluntary compliance that would likely result from expanded IRS enforcement. As I previously mentioned, IRS is able to secure tens of billions of dollars in tax revenue from noncompliant taxpayers it identifies through its various enforcement programs. 
However, given resource constraints, IRS is unable to contact millions of additional taxpayers for whom it has evidence of potential noncompliance. With additional resources, IRS would be able to assess and collect additional taxes and further reduce the tax gap. In 2002, IRS estimated that a $2.2 billion funding increase would allow it to take enforcement actions against potentially noncompliant taxpayers it identifies but cannot contact and would yield an estimated $30 billion in revenue. For example, IRS estimated that it contacted about 3 million of the over 13 million taxpayers it identified as potentially noncompliant through its matching of tax returns to information returns. IRS estimated that contacting the additional 10 million potentially noncompliant taxpayers it identified, at a cost of about $230 million, could yield nearly $7 billion in potentially collectible revenue. However, we did not evaluate the accuracy of the estimate, and as will be discussed below, many factors suggest that it is difficult to reliably estimate the net revenue increases that might come from additional enforcement efforts. Although additional enforcement funding has the potential to reduce the tax gap, the extent to which it would help depends on several factors. First, and perhaps most obviously, the amount of tax gap reduction would depend in part on the size of any budget increase. Generally, larger budget increases should result in larger reductions in the tax gap. IRS prioritizes the cases of potentially noncompliant taxpayers it reviews through its enforcement programs based on factors such as the likelihood that a taxpayer is noncompliant, the potential amount of additional taxes that could be assessed, and collection potential. As such, it is likely that IRS would begin to experience diminishing returns as it began to review additional, lower priority cases of potentially noncompliant taxpayers. 
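The document-matching example above implies straightforward per-contact arithmetic. The sketch below uses only the figures stated in the testimony; it is illustrative, not an evaluation of IRS's estimate.

```python
# Figures from the testimony: contacting 10 million additional potentially
# noncompliant taxpayers at a cost of about $230 million could yield
# nearly $7 billion in potentially collectible revenue.
contacts = 10_000_000
cost = 230_000_000        # dollars
revenue = 7_000_000_000   # dollars

cost_per_contact = cost / contacts        # dollars spent per contact
revenue_per_contact = revenue / contacts  # average potential yield per contact
roi = revenue / cost                      # revenue per enforcement dollar

print(cost_per_contact)     # 23.0
print(revenue_per_contact)  # 700.0
print(round(roi, 1))        # 30.4
```

In other words, each additional contact would cost about $23 and yield roughly $700 on average, or about $30 per enforcement dollar; as the testimony notes, these are averages, and marginal returns would fall as lower-priority cases are worked.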
Given the diminishing returns IRS would likely experience as it moves to working less and less productive cases, the amount of expected reduction in the tax gap for each additional dollar of funding would decline. Further, reductions in the tax gap that could be derived from additional enforcement funding may not be immediate. The reductions may occur gradually as IRS is able to hire and train enforcement personnel. Recently, IRS obtained some additional funding targeted for enforcement activities that it estimated will result in additional revenue. In its fiscal year 2006 budget request, IRS requested millions of dollars to expand its tax return examination and tax collection activities with the goal of increasing individual taxpayer compliance and addressing concerns raised by GAO and others regarding the erosion of IRS’s enforcement presence and the continued growth in noncompliance. In estimating the revenue that it would obtain from the increased funding, IRS took several factors into account, including opportunity costs because of training, which draws experienced enforcement personnel away from the field; differences in average enforcement revenue obtained per full-time employee by enforcement activity; and differences in the types and complexity of cases worked by new hires and experienced hires. IRS forecasted that in the initial year after expanding enforcement activities, the additional revenue it expects to collect is less than half the amount it expects to collect annually in later years. This example suggests that if Congress were to provide IRS a relatively large funding increase, the increase likely would be better delivered in smaller, steady amounts. The amount of tax gap reduction likely to be achieved from any budget increase Congress may choose to provide also depends on how well IRS can manage the additional resources. As previously mentioned, IRS does not have compliance data for some segments of the tax gap and others are based on old data. 
Periodic measurements of compliance levels can indicate the extent to which compliance is improving or declining and provide a basis for reexamining existing programs and triggering corrective actions, if necessary. Also, regardless of the type of noncompliance, IRS has concerns with its information on whether taxpayers unintentionally or intentionally fail to comply with the tax laws. Knowing the reasons why taxpayers are noncompliant can help IRS decide whether its efforts to address specific areas of noncompliance should focus on nonenforcement activities, such as improved forms or publications, or enforcement activities to pursue intentional noncompliance. For those portions of the tax gap that rely on old data and where IRS does not know the reason for taxpayers’ noncompliance, IRS may be less able to target resources efficiently to achieve the greatest tax gap reduction at the least burden to taxpayers. As part of an effort to make the best use of its enforcement resources, IRS has developed rough measures of return on investment in terms of tax revenue that it assesses from uncovering noncompliance. Generally, IRS cites an average return on investment for enforcement of 4:1, that is, IRS estimates that it collects $4 in revenue for every $1 of funding. Where IRS has developed return on investment estimates for specific programs, it finds substantial variation depending on the type of enforcement action. For instance, the ratio of estimated tax revenue gains to additional spending for pursuing known individual tax debts through phone calls is 13:1 versus a ratio of 32:1 for matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns. However, in addition to the current return on investment estimates being rough, IRS also lacks information on the incremental return on investment for some enforcement programs. 
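The cited return on investment ratios can be made concrete with a toy allocation calculation. Only the 13:1 and 32:1 ratios come from the testimony; the $100 million funding split below is a hypothetical of mine, and the sketch deliberately ignores the diminishing returns the testimony warns about.

```python
# Estimated revenue per enforcement dollar, as cited in the testimony.
ROI = {
    "phone_pursuit_of_tax_debts": 13,  # pursuing known individual tax debts
    "document_matching": 32,           # matching returns to information returns
}

def estimated_revenue(allocation):
    """Rough expected revenue (dollars) for a funding allocation, treating
    each program's average ROI as if it held at the margin."""
    return sum(ROI[program] * dollars for program, dollars in allocation.items())

# Hypothetical even split of a $100 million increase (illustrative only):
split = {"phone_pursuit_of_tax_debts": 50_000_000, "document_matching": 50_000_000}
print(estimated_revenue(split))  # 2250000000, i.e. $2.25 billion
```

A calculation like this would suggest shifting everything to the 32:1 program, which is exactly why average ratios alone are a poor allocation guide: marginal returns fall as lower-priority cases are worked, and fairness and coverage considerations also constrain allocation.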
Developing such measures is difficult because of incomplete information on all the costs and all the tax revenue ultimately collected from specific enforcement efforts. Because IRS’s current estimates of the revenue effects of additional funding are imprecise, the actual revenue that might be gained from expanding differing enforcement efforts is subject to uncertainty. Given the variation in estimated returns on investment for differing types of IRS compliance efforts, the amount of tax gap reduction that may be achieved from an increase in IRS’s resources would depend on IRS’s decisions about how to allocate the increase. Although it might be tempting to allocate resources heavily toward those areas with the highest estimated return, allocation decisions must take into account diverse and difficult issues. For instance, although one enforcement activity may have a high estimated return, that return may drop off quickly as IRS works its way through potential noncompliance cases. In addition, IRS dedicates examination resources across all types of taxpayers so that all taxpayers receive some signal that noncompliance is being addressed. Further, issues of fairness can arise if IRS focuses its efforts only on particular groups of taxpayers. Importantly, expanded enforcement efforts could reduce the tax gap by more than the tax revenue they directly collect, as widespread agreement exists that IRS enforcement programs also have an indirect effect of increasing voluntary tax compliance. The precise magnitude of the indirect effects of enforcement is not known with a high level of confidence given challenges in measuring compliance; developing reasonable assumptions about taxpayer behavior; and accounting for factors outside of IRS’s actions that can affect taxpayer compliance, such as changes in tax law. 
However, several research studies have offered insights to help better understand the indirect effects of IRS enforcement on voluntary tax compliance and show that they could exceed the direct effect of revenue obtained. Although closing the entire tax gap is neither feasible nor desirable due to costs and intrusiveness, reducing the tax gap is worthwhile for many reasons, including fairness to those who are compliant and also because it is a means to improve our nation’s fiscal position. Each of the three approaches I have discussed could make a contribution to reducing the tax gap, although using multiple approaches may be the most effective strategy since no one approach is likely to address noncompliance fully and cost effectively. However, in deciding on one or more of the three broad approaches to use, many factors or issues could affect strategic decisions. Among the broad factors to consider are the likely effectiveness of any approach, fairness, enforceability, and sustainability. Beyond these, our work points to the importance of the following:

Measuring compliance levels periodically. Regularly measuring the magnitude of, and the reasons for, noncompliance provides insights on how to reduce the gap through potential changes to tax laws and IRS programs. In July 2005, we recommended that IRS periodically measure tax compliance, identify reasons for noncompliance, and establish voluntary compliance goals. IRS agreed with the recommendations and established a voluntary tax compliance goal of 85 percent by 2009. In terms of measuring tax compliance, we have also identified alternative ways to measure compliance, including conducting examinations of small samples of tax returns over multiple years, instead of conducting examinations for a larger sample of returns for one tax year, to allow IRS to track compliance trends annually.

Leveraging technology. Better use of technology could help IRS be more efficient in reducing the tax gap. 
IRS is modernizing its technology, which has paid off in terms of telephone service, resource allocation, electronic filing, and data analysis capability. However, this ongoing modernization will need strong management and prudent investments to maximize potential efficiencies.

Considering the costs and burdens. Any action to reduce the tax gap will create costs and burdens for IRS; taxpayers; and third parties, such as those who file information returns. As discussed earlier, for example, withholding and information reporting requirements impose some costs and burdens on those that track and report information. These costs and burdens need to be reasonable in relation to the improvements expected to arise from new compliance strategies.

Optimizing resource allocation. As previously discussed, developing reliable measures of the return on investment for strategies to reduce the tax gap would help inform IRS resource allocation decisions. IRS has rough measures of return on investment based on the additional taxes it assesses. Developing such measures is difficult because of incomplete data on the costs of enforcement and collected revenues. Beyond direct revenues, IRS’s enforcement actions have indirect revenue effects, which are difficult to measure. However, indirect effects could far exceed direct revenue effects and would be important to consider in connection with continued development of return on investment measures.

Evaluating the results. Evaluating the actions taken by IRS to reduce the tax gap would help maximize IRS’s effectiveness. Evaluations can be challenging because it is difficult to isolate the effects of IRS’s actions from other influences on taxpayers’ compliance. Our work has discussed how to address these challenges, for example by using research to link actions with the outputs and desired effects. 
When taxpayers do not pay all of their taxes, honest taxpayers carry a greater burden to fund government programs and the nation is less able to address its long-term fiscal challenges. Thus, reducing the tax gap is important, even though closing the entire tax gap is neither feasible nor desirable because of costs and intrusiveness. All of the approaches I have discussed have the potential to reduce the tax gap alone or in combination, and no one approach is clearly and always superior to the others. As a result, IRS needs a strategy to attack the tax gap on multiple fronts with multiple approaches. Mr. Chairman and Members of the Subcommittee, this concludes my testimony. I would be happy to answer any questions you may have at this time. For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Tom Short, Assistant Director; Jeff Arkin; Cheryl Peterson; and Jeff Procak. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The tax gap--the difference between the tax amounts taxpayers pay voluntarily and on time and what they should pay under the law--has been a long-standing problem in spite of many efforts to reduce it. Most recently, the Internal Revenue Service (IRS) estimated a gross tax gap for tax year 2001 of $345 billion and estimated it would recover $55 billion of this gap, resulting in a net tax gap of $290 billion. When some taxpayers fail to comply, the burden of funding the nation's commitments falls more heavily on compliant taxpayers. Reducing the tax gap would help improve the nation's fiscal stability. For example, each 1 percent reduction in the net tax gap would likely yield $3 billion annually. GAO was asked to discuss the tax gap and various approaches to reduce it. This testimony discusses to what extent the tax gap could be reduced through three approaches--simplifying or reforming the tax system, providing IRS with additional enforcement tools, and devoting additional resources to enforcement--as well as various factors that could guide decision-making when devising a strategy to reduce the tax gap. This statement is based on prior GAO work. Simplifying the tax code or fundamental tax reform has the potential to reduce the tax gap by billions of dollars. IRS has estimated that errors in claiming tax credits and deductions for tax year 2001 contributed $32 billion to the tax gap. Thus, considerable potential exists. However, these provisions serve purposes Congress has judged to be important and eliminating or consolidating them could be complicated. Fundamental tax reform would be most likely to result in a smaller tax gap if the new system has few, if any, exceptions (e.g., few tax preferences) and taxable transactions are transparent to tax administrators. These characteristics are difficult to achieve, and any tax system could be subject to noncompliance. Withholding and information reporting are particularly powerful tools to reduce the tax gap. 
They could help reduce the tax gap by billions of dollars, especially if they can make currently underreported income transparent to IRS. These tools have been shown to lead to high, sustained levels of taxpayer compliance. Using these tools can also help IRS better allocate its resources to the extent they help IRS identify and prioritize its contacts with noncompliant taxpayers. As GAO previously suggested, reporting the cost, or basis, of securities sales is one option to improve taxpayers' compliance. However, designing additional withholding and information reporting requirements may be challenging given that many types of income are already subject to reporting, there are many forms of underreporting, and withholding and reporting requirements impose costs on third parties. Devoting additional resources to enforcement has the potential to help reduce the tax gap by billions of dollars. However, determining the appropriate level of enforcement resources for IRS requires taking into account many factors such as how well IRS is currently using its resources, how to strike the proper balance between IRS's taxpayer service and enforcement activities, and competing federal funding priorities. If Congress decides to provide IRS more enforcement resources, the amount the tax gap could be reduced would depend on factors such as the size of budget increases, how IRS manages any additional resources, and the indirect increase in taxpayers' voluntary compliance resulting from expanded enforcement. Increasing IRS's funding would enable it to contact millions of potentially noncompliant taxpayers it identifies but does not have resources to contact. Finally, using multiple approaches may be the most effective strategy to reduce the tax gap, as no one approach is likely to fully and cost-effectively address noncompliance. 
Key factors to consider in devising a tax gap reduction strategy include periodically measuring noncompliance and its causes, setting reduction goals, leveraging technology, optimizing IRS's allocation of resources, and evaluating the results of any initiatives.
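The dollar figures above imply the $3 billion estimate directly. A minimal arithmetic sketch, using only the tax year 2001 estimates cited in this statement:

```python
# Tax year 2001 tax gap estimates cited above (billions of dollars).
gross_tax_gap = 345          # taxes not paid voluntarily and on time
expected_recoveries = 55     # amount IRS estimated it would recover
net_tax_gap = gross_tax_gap - expected_recoveries  # 290

# Annual yield of each 1 percent reduction in the net tax gap.
one_percent_yield = 0.01 * net_tax_gap  # 2.9, i.e., roughly $3 billion
```

This is how the statement's "each 1 percent reduction ... would likely yield $3 billion annually" follows from the $345 billion gross and $290 billion net figures.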
The Bank Insurance Fund (BIF) has substantially rebuilt its reserves over the last 3 years from a deficit position at the end of calendar year 1991. The Savings Association Insurance Fund (SAIF), while also building its reserves, is doing so at a significantly slower rate. The Congress, administration, savings association trade groups, regulators, and other interested parties have expressed concern that a significant disparity in premium rates between BIF and SAIF could develop when BIF is fully recapitalized if the Federal Deposit Insurance Corporation (FDIC) lowers BIF’s premium rates. They are concerned that a significant insurance premium rate differential could put SAIF-insured institutions at a competitive disadvantage with their BIF-insured counterparts. They believe that this, in turn, could have serious implications for the long-term viability of the industry and its insurance fund. Pursuant to the June 10, 1994, request of the now Chairman of the Senate Committee on Banking, Housing, and Urban Affairs and the now Ranking Minority Member of the House Committee on Small Business, we undertook a review of the issues related to the likelihood that an insurance premium rate differential would develop between bank and thrift institutions and the potential impact of such a differential on the banking and thrift industries and their respective insurance funds. During the 1980s, the savings and loan industry experienced severe financial difficulties, and the deterioration of the industry’s financial condition overwhelmed the resources of its deposit insurance fund, the Federal Savings and Loan Insurance Corporation (FSLIC). By 1988, the condition of the industry and its insurance fund had reached crisis proportions. At December 31, 1988, FSLIC reported a deficit of $75 billion. The Financing Corporation (FICO) was established in 1987 to recapitalize FSLIC. FICO was funded mainly through the issuance of public debt offerings, which were limited to $10.8 billion. 
The net proceeds of FICO’s debt offerings were used to purchase capital stock and capital certificates issued by FSLIC—in effect, providing capital to FSLIC. FICO was authorized to assess FSLIC-insured institutions for the annual interest expense on the obligations issued, as well as for bond issuance and custodial costs. The industry’s problems, however, required far more funding than that provided through FICO. In response to the thrift crisis, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) was enacted. FIRREA abolished FSLIC and created the Resolution Trust Corporation (RTC) to manage and resolve all troubled savings institutions that were previously insured by FSLIC and for which a conservator or receiver was appointed during the period January 1, 1989, through August 8, 1992. FIRREA also provided RTC with an initial $50 billion for the cost of resolving these institutions. FIRREA created a new insurance fund for thrifts, the Savings Association Insurance Fund; renamed the existing insurance fund for banks the Bank Insurance Fund; and designated FDIC as sole insurer of all banks and savings associations and administrator of both insurance funds. FIRREA authorized FICO, with the approval of the FDIC Board of Directors, to assess SAIF-member savings associations to cover its interest payments, issuance costs, and custodial fees. Subsequently, the RTC Refinancing, Restructuring, and Improvement Act terminated FICO’s authority to issue bonds, but it did not modify FICO’s authority to assess SAIF members to cover its annual interest expense, which will continue until the 30-year recapitalization bonds mature in the years 2017 through 2019. FIRREA provided that the amount of FICO’s assessment was not to exceed the amount authorized to be assessed SAIF members by FDIC for insurance premiums, and that FICO’s assessment was to be deducted from the amount FDIC was authorized to assess SAIF members. 
FIRREA and subsequent legislation also amended the Federal Deposit Insurance Act (FDI Act), particularly with respect to insurance assessments. Under the FDI Act, as amended, the FDIC Board of Directors is to set semiannual insurance premium rates for SAIF and BIF independently. Further, the Board is to set such rates for SAIF to increase SAIF’s reserve ratio to the designated reserve ratio and, once SAIF attains the designated reserve ratio, to maintain SAIF’s reserve ratio at the designated reserve ratio. In setting insurance premium rates, the Board of Directors is required to consider the Fund’s expected operating expenses, case resolution expenditures and income, the effect of assessments on members’ earnings and capital, and any other factors that the Board of Directors may deem appropriate. The FDI Act, as amended, establishes a designated reserve ratio of 1.25 percent for both BIF and SAIF so that both funds build reserves sufficient to withstand the pressures of any substantial financial institution failures in the future. FDIC’s Board of Directors must set insurance premium rates at a level that will enable each fund to build its reserves to reach this ratio. The fund capitalization provisions added to the FDI Act by the FDIC Improvement Act of 1991 (FDICIA) required FDIC to establish a recapitalization schedule for BIF to achieve the designated reserve ratio not later than 15 years after implementation and to set insurance assessments in accordance with this schedule. Until January 1, 1998, FDIC must set SAIF’s insurance premium rates at a level that will enable SAIF to achieve the designated reserve ratio within a reasonable period of time. FDIC’s Board of Directors has the authority to lower SAIF premiums to an average annual rate of 18 basis points until January 1, 1998. After January 1, 1998, FDIC must set premium rates for SAIF to meet the designated reserve ratio according to a 15-year schedule. 
FDIC may extend the date specified in the schedule to a later date that it determines will, over time, maximize the amount of insurance premiums received by SAIF, net of insurance losses incurred. FDIC currently projects that BIF will reach the 1.25 percent designated reserve ratio during 1995, and SAIF is projected to attain its ratio in 2002. As of December 31, 1994, BIF had unaudited reserves of $21.8 billion, representing approximately 1.16 percent of insured deposits. As of the same date, SAIF had unaudited reserves of $1.9 billion, representing approximately 0.27 percent of insured deposits. Currently, BIF-insured institutions are assessed insurance premiums at a rate averaging 23 cents for every $100 in deposits subject to assessments (23 basis points), while SAIF-insured institutions are assessed at premium rates averaging 24 cents for every $100 of assessable deposits (24 basis points). SAIF was created without any initial capital, and from SAIF’s inception through December 31, 1992, FICO, the Resolution Funding Corporation (REFCORP), and the FSLIC Resolution Fund (FRF) had prior claim on a substantial portion of SAIF members’ insurance premiums. During the period 1989 through 1993, approximately $6.4 billion, or 84 percent of SAIF’s insurance premiums, were used to fund the priority claims of FICO, REFCORP, and FRF. Beginning in 1993, only FICO continued to have prior claim on SAIF members’ insurance premiums, with SAIF receiving the remaining amount. In 1993, FICO received $779 million, which represented approximately 46 percent of SAIF’s total insurance premiums for that year. To address the problem of SAIF’s capitalization in light of the other claims on its insurance premiums, the FDI Act, as amended by FIRREA, provided for two types of supplemental funding from the Treasury—backup funding for SAIF insurance premiums and payments to maintain a minimum fund balance. 
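The reserve ratios cited above are simply fund reserves divided by estimated insured deposits. A minimal sketch of that arithmetic, backing the insured-deposit totals out of the reported year-end 1994 ratios (an illustration of the relationship, not FDIC's methodology):

```python
# Unaudited December 31, 1994, figures cited above (billions of dollars).
bif_reserves, bif_ratio = 21.8, 0.0116    # BIF: ~1.16 percent of insured deposits
saif_reserves, saif_ratio = 1.9, 0.0027   # SAIF: ~0.27 percent of insured deposits

# Insured-deposit totals implied by the reported ratios.
bif_insured = bif_reserves / bif_ratio    # ~$1,879 billion
saif_insured = saif_reserves / saif_ratio # ~$704 billion

# Additional reserves each fund would need to reach the 1.25 percent
# designated reserve ratio at these deposit levels.
designated = 0.0125
bif_shortfall = designated * bif_insured - bif_reserves    # ~$1.7 billion
saif_shortfall = designated * saif_insured - saif_reserves # ~$6.9 billion
```

The disparity in the two shortfalls, relative to each fund's premium income, is why BIF is projected to capitalize in 1995 while SAIF is not projected to do so until 2002.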
As subsequently amended by the RTC Refinancing, Restructuring, and Improvement Act of 1991, these provisions required the Treasury to provide funding to SAIF each fiscal year from 1993 to 2000 to the extent that the SAIF-member insurance premiums deposited in the Fund did not total $2 billion a year. This would have assured SAIF of at least $16 billion in either premium income or Treasury payments. In addition, Treasury was authorized to make annual payments necessary to ensure that SAIF had a specific net worth, ranging from zero during fiscal year 1992 to $8.8 billion during fiscal year 2000. The cumulative amounts of these payments were also not to exceed $16 billion. The FDI Act, as amended, also authorized funds to be appropriated to the Secretary of the Treasury for purposes of these payments. However, none of the funds authorized were actually appropriated. The funding provisions contained in the FDI Act were again amended in December 1993 by the RTC Completion Act. The amendments authorize Treasury payments of up to $8 billion to SAIF for insurance losses incurred in fiscal years 1994 through 1998. Additionally, before any funds can be made available to SAIF for this purpose, FDIC must certify to the Congress, among other things, that (1) SAIF-insured institutions are unable to pay premiums sufficient to cover insurance losses and to meet a repayment schedule for any amount borrowed from the Treasury for insurance purposes under the FDI Act, as amended, without adversely affecting their ability to raise and maintain capital or to maintain the assessment base and (2) an increase in premiums could reasonably be expected to result in greater losses to the government. The RTC Completion Act also makes available to SAIF any of the RTC’s unused loss funding to cover insurance losses during the 2-year period beginning on the date of RTC’s termination. 
However, SAIF’s use of this funding is subject to restrictions similar to those of the Treasury funding authorized under the act. Additionally, FDICIA provided SAIF a mechanism for funding insurance losses. Specifically, FDICIA authorized FDIC to borrow up to $30 billion from the Treasury, on behalf of SAIF or BIF, for insurance purposes. No borrowing has yet occurred; however, BIF or SAIF would have to repay any amounts borrowed from the Treasury with premium revenues. Also, FDIC would have to provide the Treasury with a repayment schedule demonstrating that future premium revenue would be adequate to repay any amounts borrowed plus interest. Additionally, the amount of such borrowings is further restricted by a formula limiting each fund’s total obligations. At the time FIRREA was enacted, the administration projected annual thrift deposit growth of 6 to 7 percent. Under this assumption, the annual FICO interest obligation would have accounted for 7 basis points (29 percent) of the 24 basis points charged annually for SAIF premiums. Since SAIF’s inception, however, total SAIF deposits have declined an average of 5 percent annually, from $948 billion in 1989 to $711 billion in 1994. As a result, the annual FICO interest obligation is being spread over a smaller than anticipated assessment base. Thus, the FICO interest obligation represents a significantly higher proportion of the assessment rate and the premiums paid by SAIF members than originally assumed. Another factor which exacerbates the problem of shrinkage in SAIF’s assessment base is the growth of a segment of the SAIF assessment base whose premiums may not be used to fund the FICO interest obligation. This segment of SAIF’s assessment base includes deposits which have been acquired by BIF members from SAIF members, and former savings associations that have converted to bank charters while retaining SAIF membership. 
Thrift deposits acquired by BIF members, referred to as “Oakar” deposits, retain SAIF insurance coverage, and the acquiring institution pays insurance premiums to SAIF for these deposits at SAIF’s premium rates. However, because the institution acquiring these deposits is not a savings association and remains a BIF member as opposed to a SAIF member, the insurance premiums it pays to SAIF, while available to capitalize the Fund, are not available to service the FICO interest obligation. When the acquisition occurs, FDIC establishes a ratio of BIF-insured deposits to SAIF-insured deposits for the BIF member acquiring institution. This ratio remains constant for the institution in the event of subsequent deposit growth or shrinkage. Similarly, premiums paid by SAIF-member savings associations that have converted to bank charters, referred to as “Sasser” institutions, are unavailable to fund the FICO interest obligation since the institutions are banks as opposed to savings associations. Currently, SAIF-insured institutions cannot voluntarily change or convert their membership from SAIF to BIF. The FDI Act, as amended, contains a moratorium on conversions from SAIF to BIF except in limited cases where (1) the conversion transaction affects an insubstantial portion of the total deposits of the institution as determined by FDIC and (2) the conversion occurs in connection with the acquisition of a SAIF member in default or in danger of default and FDIC determines that the benefits to SAIF or RTC equal or exceed FDIC’s estimate of the loss of insurance premium income over the remaining balance of the moratorium period and RTC concurs with FDIC’s determination. Once SAIF is fully capitalized, the moratorium on conversions will be lifted. However, institutions converting their membership will be subject to substantial entrance and exit fees. 
As directed by the requesters’ June 10, 1994, letter, our objectives were to (1) determine the likelihood, potential size, and timing of a differential in premium rates between BIF- and SAIF-insured institutions, (2) analyze possible effects of the premium rate differential on the thrift and banking industries, (3) assess potential threats to SAIF’s viability, and (4) present various policy options to avoid or mitigate problems which a premium rate differential may create. As agreed with the requesters, we did not analyze the potential effects of the premium rate differential on the availability of housing finance. To address the above questions, we obtained background information and data from officials at FDIC, the Office of Thrift Supervision (OTS), the Board of Governors of the Federal Reserve System, the Federal Housing Finance Board, and the Department of the Treasury. We also met with officials at the Savings and Community Bankers Association, the California League of Savings Institutions, the Savings Association Insurance Fund Industry Advisory Committee (SAIFIAC), the American Bankers Association, and other knowledgeable parties who provided us with information and their perspectives. For our analyses, we relied on FDIC’s projected capitalization schedules for BIF and SAIF, and detailed financial data for SAIF-member institutions. We also relied on information reported by FDIC regarding troubled thrifts and potential future failures. We verified that key beginning figures in FDIC’s capitalization schedules were reasonable in relation to BIF’s and SAIF’s financial statements; however, we did not audit the data presented in the schedules. Also, we did not audit the detailed financial data for SAIF members provided by FDIC, nor did we audit the information regarding troubled thrifts and potential future failures reported by FDIC. 
In order to determine the likelihood, potential size, and timing of a differential in premium rates between BIF- and SAIF-insured institutions and to assess the future outlook for SAIF, we identified the major assumptions underlying FDIC’s projected capitalization schedules for BIF and SAIF. We considered the potential effects of major uncertainties associated with these assumptions as well as other uncertainties affecting the duration of a differential in premium rates. We also analyzed the effects of various institution failure rates on SAIF’s ability to attain its designated reserve ratio. Additionally, we analyzed the effects of shrinkage in the portion of SAIF’s assessment base available to pay FICO on SAIF’s ability to finance the annual interest obligation to FICO’s bondholders. In order to analyze the possible effects of the premium rate differential on the thrift and banking industries, we developed economic scenarios as a framework to forecast the potential magnitude of the impact of FDIC’s projected premium differential. We used this approach due to the lack of reliable statistical estimates of the likely behavioral responses of banks and thrifts resulting from a differential in premium rates. Using detailed financial data for SAIF members on a national level, we converted the premium differential into a cost increase for SAIF members. We also analyzed data for SAIF-member institutions in California, a state with a significant level of thrift assets. In our calculations, we used FDIC’s projected premium rate differential between BIF and SAIF. We used information gained throughout the assignment to present various options available for mitigating or avoiding the potential problems associated with a premium differential between BIF and SAIF. We altered assumptions in FDIC’s BIF and SAIF projection schedules to correspond with some of the options presented. 
We conducted our work in Washington, D.C., from August 1994 through February 1995 in accordance with generally accepted government auditing standards. FDIC, OTS, and the Department of the Treasury provided written comments on a draft of this report. These comments have been incorporated, as appropriate, throughout this report, and are reprinted in appendixes I through III. A significant differential in premium rates charged by BIF and SAIF will develop in 1995, if FDIC lowers rates for BIF members immediately after BIF reaches its designated reserve ratio in 1995. FDIC projections indicate that, beginning in 1996, SAIF’s premium rates will be more than five times the rate of BIF premiums until SAIF’s projected capitalization in the year 2002. The premium rate differential could continue for the duration of the FICO interest obligation if SAIF-insured thrifts continue to be assessed at rates sufficient to pay the interest on the FICO bonds. Significant uncertainties exist with respect to key assumptions in FDIC’s projection schedules, including institution failure and loss assumptions, and future shrinkage in the portion of SAIF’s deposit base available to fund the FICO interest obligation. These factors could affect SAIF’s capitalization date and future premium rates. FDIC’s current projections for BIF indicate that BIF will attain its designated ratio of reserves to insured deposits of 1.25 percent in 1995. Given the Fund’s current condition and short-term outlook, it is fairly certain that BIF will achieve the designated reserve ratio in 1995. In response to the Fund’s rapid improvement and its current outlook, on January 31, 1995, FDIC’s Board of Directors issued for public comment a proposal that would significantly reduce the average annual premium rates charged to BIF-insured institutions. 
FDIC’s Board of Directors could adjust BIF-member premium rates as early as the September 30, 1995, payment date to reflect the date on which the Fund achieves the designated reserve ratio. FDIC’s projections for SAIF indicate that SAIF will attain its designated reserve ratio in the year 2002, 7 years later than BIF. FDIC projects that BIF insurance premium rates will average 4 to 5 basis points (4 to 5 cents per $100 of deposits) after BIF reaches its designated reserve ratio. FDIC estimates that this rate will be sufficient to cover future insurance losses and maintain the Fund’s reserve ratio. In contrast, FDIC projects that SAIF’s premium rates will remain at an average of 24 basis points, more than five times the rate for BIF-insured institutions, until SAIF reaches its designated reserve ratio. (See figure 2.1.) Because of the potential magnitude of the differential in premium rates between BIF and SAIF that could develop under the Board’s proposal and the potential effects such a differential could have on thrifts and their insurance fund, the Director of OTS, at the January 31, 1995, FDIC Board meeting, requested that the Board hold public hearings to discuss the issues and concerns raised by the Board’s proposal. We concur with the OTS Director’s request and believe such hearings would be a useful forum for examining the implications associated with the premium rate disparity that would develop under the Board’s proposal. Uncertainties inherent in the estimation process could result in the actual premium rate differential being significantly different from the projected differential in any given year. However, it is fairly certain that a period of high premium rate differentials will exist between BIF and SAIF until SAIF reaches its designated reserve ratio. Since FICO bonds were first issued in 1987, the thrift industry has paid assessments for the annual interest expense on FICO’s bonds. 
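In dollar terms, the projected rate disparity can be illustrated for a hypothetical institution with $1 billion of assessable deposits. In this sketch, the 4.5 basis point BIF rate is our assumption, taken as the midpoint of FDIC's projected 4 to 5 basis point range:

```python
BASIS_POINT = 0.0001  # 1 basis point = 1 cent per $100 of assessable deposits

saif_rate = 24 * BASIS_POINT   # projected average SAIF rate
bif_rate = 4.5 * BASIS_POINT   # assumed midpoint of FDIC's projected BIF range

deposits = 1_000_000_000       # illustrative $1 billion of assessable deposits

saif_premium = saif_rate * deposits   # $2.4 million a year
bif_premium = bif_rate * deposits     # $0.45 million a year
differential = saif_premium - bif_premium  # ~$1.95 million a year

rate_ratio = saif_rate / bif_rate     # ~5.3, "more than five times"
```

The annual cost gap of nearly $2 million per $1 billion of deposits is the competitive disadvantage referred to throughout this chapter.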
FDIC projections are that SAIF will achieve its designated reserve ratio in 2002 and that SAIF-insured thrifts will be assessed for FICO bond interest through that time. For purposes of our analyses, we assume that assessments of SAIF-insured thrifts for FICO bond interest will continue until the bonds mature in 2017 through 2019. If FICO assesses SAIF members to pay the annual FICO interest, then using the assumptions underlying FDIC’s projections, annual assessment rates could be lowered to approximately 19 basis points after SAIF attains its designated reserve ratio. However, these rates would need to be gradually increased as the portion of SAIF’s assessment base available to pay FICO decreases. This would result in a substantial premium rate differential continuing through the liquidation of FICO bonds, while at the same time increasing the Fund’s reserve ratio to a level significantly higher than the designated reserve ratio. The premium rates for SAIF and the resulting differential could be even higher under scenarios where the portion of the SAIF assessment base available to pay FICO interest experiences significant shrinkage. FDIC official projections on assessments for SAIF-insured thrifts do not go beyond the year 2002 or otherwise address to what extent SAIF-insured thrifts may be assessed for FICO bond interest after SAIF achieves its designated reserve ratio. If SAIF-insured thrifts are not assessed for the FICO bond interest, FICO will be unable to pay the interest expense unless other funding mechanisms are made available. FDIC officials advised us that they will be examining this issue. In its comments on a draft of this report, FDIC stated that in setting SAIF premiums, it may consider FICO assessments and the effects of SAIF premiums on the ability of FICO to meet its obligations. However, FDIC’s comments also reflected the tension that FDIC may face at some future time between its duty to protect SAIF and FICO’s debt service requirements. 
SAIF’s ability to achieve its designated reserve ratio in 2002 as currently projected by FDIC is subject to significant uncertainties regarding assumed institution failure rates and associated losses used by FDIC in its projections. Long-range estimates of future thrift failures and losses associated with those failures are extremely uncertain. The health of the industry is subject to many variables which are extremely difficult to predict, such as changes in interest rates, the economy, and real estate markets. If financial institution failures and associated losses for SAIF are higher than those projected, SAIF may not achieve its designated reserve ratio in the time frame projected by FDIC. Because of the unprecedented nature of the thrift industry crisis, recent thrift failure and loss experience may not provide a sound basis for estimating future losses. Also, requirements for corporate governance and accounting reforms and prompt corrective action by regulators are intended to prevent such high levels of financial institution failures in the future and to limit the losses associated with those that do fail. For these reasons, FDIC used historical bank failure rates, rather than thrift failure rates, as a consideration in projecting future SAIF-institution failures. FDIC also considered current conditions in the thrift industry in projecting SAIF- institution failures. Additionally, FDIC used historical losses on failed bank assets to estimate SAIF’s future losses on failed institution assets. Because recent bank failure rates also may not provide a sound basis for projecting future failures due to recent, significant changes in the business and regulatory environments for financial institutions, FDIC adjusted the average of BIF’s failure rate over the last 20 years to arrive at the rate used in SAIF’s projections. The institution failure rates used in SAIF’s projections are about one-half the average bank failure rate over the last 20 years. 
Specifically, FDIC projected that, beginning in 1996, institutions holding approximately 0.22 percent of total industry assets will fail each year. (See figure 2.2.) FDIC projected that losses associated with the failures of such institutions will be 13 percent of their assets, which is approximately the average loss experience on failed bank assets over the last 20 years. However, the loss rates have fluctuated significantly from year to year, and future loss rates could be significantly different from those projected. (See figure 2.3.) In addition to the uncertainties associated with failure and loss rates, the rates used in FDIC’s projections are constant. As such, they spread the effects of business cycles across all of the years presented. Consequently, the effects of business cycles could cause actual insurance losses for any given year to vary significantly from what FDIC’s projections indicate. If SAIF experiences a higher level of failures than assumed by FDIC in its projections and all other factors are held constant, the Fund’s ability to capitalize by the year 2002 would be seriously jeopardized. As of September 30, 1994, FDIC reported in the FDIC Quarterly Banking Profile - Third Quarter 1994 that 62 SAIF members with $47 billion in assets were considered problem institutions, with financial, operational, or managerial weaknesses that threaten their continued financial viability. It is difficult to reliably predict the amount and timing of institution failures, even for problem institutions. Not all problem institutions ultimately fail; many, historically, have corrected conditions that caused regulatory concerns and strengthened their financial condition. Conversely, institutions not currently considered to be problem institutions could become troubled as a result of unfavorable changes in future economic conditions, including changes in interest rates and real estate markets. 
Currently, FDIC projects that institutions holding 31 percent of the assets in the current group of SAIF-insured problem institutions will fail between 1996 and 2002, and that SAIF will incur losses equal to 13 percent of those institutions’ assets. If future failures are higher than projected and premium rates remain unchanged at the average annual rate of 24 basis points, SAIF’s capitalization could be delayed. (See table 2.1.) Another uncertainty affecting the projected institution failure and loss rates for SAIF is the potential effect of a premium rate differential on SAIF institutions. FDIC’s failed asset projections for SAIF do not explicitly consider the possible effects of a premium rate differential on thrift failures. FDIC projected an annual deposit shrinkage of 2 percent for the portion of SAIF’s deposit base available to service the annual FICO interest obligation. However, significant uncertainties exist regarding FDIC’s assumptions of changes in SAIF’s future assessment base. Since SAIF’s inception, both its total deposit base and the portion available to pay FICO have experienced significant shrinkage. With the pending significant differential between BIF and SAIF premium rates, the SAIF deposit base available to service FICO bond interest may decline by more than the 2-percent annual rate projected by FDIC. Currently, about 31 percent of SAIF’s assessment base belongs to institutions whose premiums are not subject to FICO assessments. About 24 percent of SAIF’s assessment base consists of Oakar deposits, which are held by BIF members, and about 7 percent is held by Sasser institutions, former savings associations that have converted to bank charters yet retain SAIF membership. As explained in chapter 1, the insurance premiums paid on these deposits cannot be used to pay FICO, since FICO’s assessment authority to pay its costs extends only to SAIF-member savings associations. 
SAIF’s total deposit base has declined by 25 percent since its inception, or an average decline of 5 percent each year, from $948 billion in 1989 to $711 billion in 1994. The portion of SAIF’s base available to pay FICO—the FICO assessment base—has experienced a decline of 48 percent since SAIF’s inception, or an average annual decline of almost 10 percent. (See figure 2.4.) It is difficult to predict future shrinkage in the portion of SAIF’s assessment base available to pay FICO. Growth in Oakar deposits from BIF-member acquisitions of thrift deposits causes shrinkage in the portion of SAIF’s assessment base available to pay FICO. The amount of Oakar deposits has grown rapidly since SAIF’s inception. Between 1990 and 1994, Oakar deposits have increased by $136 billion, to a total deposit base of $167 billion. Coupled with a decline in SAIF’s total deposit base, Oakar deposits have grown substantially as a portion of SAIF’s deposit base. Deposits in Sasser institutions, although significant, have not experienced substantial growth. Some of the past growth in Oakar deposits resulted from BIF-member institutions acquiring deposits from thrifts resolved by RTC. The unprecedented high number of thrift failures is unlikely to continue. However, it is not possible to predict future BIF-member acquisitions of thrift deposits due to voluntary shrinkage within the thrift industry. For example, in 1993 and 1994, the increase in Oakar deposits was significantly greater than the amount of deposits in institutions resolved by the RTC during this period. Consequently, it is difficult to predict future growth in Oakar deposits. 
Nonetheless, if SAIF’s Oakar deposits grow at only the 2-percent annual rate FDIC projects for BIF members, while the portion of SAIF’s assessment base available to pay FICO experiences the 2-percent annual decline projected by FDIC, the Oakar portion of SAIF’s assessment base will continue to increase relative to the Fund’s total assessment base. As a result, a continually decreasing portion of SAIF’s total annual premium income would be available to service the FICO interest obligation.

Changes in SAIF’s assessment base could also have a significant effect on the premium rates charged to institutions with SAIF-insured deposits. Assuming payments for the FICO interest obligation are included in SAIF’s premium rates, FDIC’s projections indicate that the portion of SAIF’s assessment base available to pay FICO cannot withstand significant shrinkage without FDIC having to increase insurance premium rates to fund the annual FICO interest obligation. The portion of SAIF’s assessment base available to pay FICO totaled about $500 billion at December 31, 1994. At the current assessment rate of 24 basis points, the base could shrink to approximately $325 billion before premium rates would need to be increased to pay the FICO obligation.

Under FDIC’s assumptions of a 2-percent decline in the portion of SAIF’s base available to pay FICO and no future purchases of thrift deposits by BIF members, premiums would need to be increased in about the year 2012 to pay the FICO obligation. If the average of past SAIF deposit shrinkage and purchases of thrift deposits by BIF members were to continue, SAIF would need to increase rates in the year 2000 to raise enough funds to pay the FICO obligation. With the pending significant differential between BIF and SAIF premium rates, the SAIF deposit base is likely to continue declining in the foreseeable future.
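The basis-point arithmetic behind these thresholds can be sketched in a few lines. The approximately $780 million annual FICO interest figure below is an inference, not a figure stated in this discussion: it is backed out from the statement that a 24-basis-point premium on a $325 billion base just covers the obligation. Small differences from FDIC's published trajectory reflect FDIC's more detailed model.

```python
# Illustrative sketch of the FICO servicing arithmetic (assumption: the
# annual FICO interest is inferred as 24 bp x $325 billion, ~$780 million).
FICO_INTEREST = 0.0024 * 325e9          # ~$780 million per year

def required_basis_points(base):
    """Premium rate, in basis points, needed on `base` to cover FICO."""
    return FICO_INTEREST / base * 10_000

# The FICO-assessable base was about $500 billion at year-end 1994;
# FDIC projects a 2-percent annual decline.
base = 500e9
print(round(required_basis_points(base), 1))    # ~15.6 bp, near the 16 bp cited later
for year in range(1995, 2013):
    base *= 0.98                                # 2% annual shrinkage through 2012
print(round(required_basis_points(base), 1))    # ~22 bp, approaching the 24 bp ceiling
```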
To reduce the burden of a significant cost disadvantage in relation to BIF members, SAIF members could place less reliance on deposits as a source of funding and turn to alternative sources, such as Federal Home Loan Bank advances and repurchase agreements. The differential could thus accelerate deposit shrinkage within institutions, further reducing SAIF’s assessment base. This, in turn, could cause further increases in premium rates to fund the fixed FICO interest obligation.

The future ability of SAIF-insured institutions to voluntarily convert from SAIF to BIF membership is another factor that could significantly affect SAIF’s future assessment base. Generally, institutions cannot convert their membership from SAIF to BIF until SAIF achieves its designated reserve ratio; once it does, the moratorium on such conversions will be lifted. Institutions converting from SAIF to BIF membership will pay an exit fee to SAIF and an entrance fee to BIF. Whether institutions will be motivated to voluntarily convert when the moratorium is lifted will depend, in part, on the cost of the fixed FICO interest obligation in relation to the SAIF assessment base at the time. Because the premium rate differential could continue after SAIF’s capitalization for the duration of the FICO obligation, institutions could find it beneficial to convert their membership to avoid continued payment of higher premiums than those paid by BIF members. Such voluntary conversions would cause further shrinkage in SAIF’s assessment base, which would make the fixed FICO obligation relatively more expensive for the shrinking base, in turn causing additional shrinkage in the base.
As of December 31, 1994, SAIF had unaudited reserves of $1.9 billion, representing approximately 0.27 percent of insured deposits, or 27 cents for every $100 in insured deposits. FDIC projects that SAIF’s reserves will gradually increase until SAIF reaches its designated reserve ratio in 2002, with approximately $8.0 billion in reserves. (See table 2.2.)

To date, few demands have been placed on SAIF for resolution of failed institutions, since the primary responsibility for resolving failed thrifts has rested with RTC. However, RTC’s authority to place failed thrifts into conservatorship expires on June 30, 1995, at which time SAIF will assume full responsibility for failures of SAIF-insured institutions. Currently, SAIF does not have a large capital cushion to absorb the cost of thrift failures. Although FDIC’s projections indicate that SAIF could manage the currently projected rate of failures, the failure of a single large institution or a higher than projected level of failures could delay SAIF’s capitalization and increase the risk of SAIF becoming insolvent. SAIF’s exposure will continue until its reserves are substantially increased.

Although the condition of the thrift industry has substantially improved over the past few years, a large segment of the industry still confronts weak economic conditions. The nation’s seven largest thrift institutions are headquartered in California and hold 23 percent of the industry’s assets. In general, California has lagged behind most of the nation in recovering from the most recent recession. Additionally, a few large institutions have raised supervisory concerns due to low earnings and relatively high levels of risk in their portfolios. Therefore, SAIF still faces significant exposure relative to its current level of reserves. Any delays in SAIF’s capitalization will only extend the period of risk associated with a thinly capitalized insurance fund.
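The reserve-ratio figures above imply the size of the insured-deposit base, which is worth making explicit. The sketch below simply inverts the ratio; the implied deposit amounts are back-of-the-envelope inferences, not numbers stated in the report.

```python
# Back-of-the-envelope check on the reserve-ratio figures (illustrative).
reserves_1994 = 1.9e9                    # unaudited SAIF reserves, year-end 1994
ratio_1994 = 0.0027                      # 27 cents per $100 of insured deposits
implied_insured_deposits = reserves_1994 / ratio_1994

DESIGNATED_RATIO = 0.0125                # statutory 1.25% target
# FDIC's projected $8.0 billion in reserves at the 1.25% target in 2002
# implies an insured-deposit base of about $640 billion by then.
implied_2002_deposits = 8.0e9 / DESIGNATED_RATIO

print(round(implied_insured_deposits / 1e9))    # ~704 (billions)
print(round(implied_2002_deposits / 1e9))       # 640 (billions)
```

Note that the implied 2002 base is smaller than the 1994 base, consistent with the deposit shrinkage discussed in this chapter.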
It should be noted, however, that the prompt corrective action provisions and regulatory requirements in FDICIA were designed to minimize losses to the insurance funds. The degree to which regulators exercise their regulatory and supervisory responsibilities under these provisions will thus be a significant factor in preventing or minimizing SAIF’s future insurance losses from thrift failures.

A significant premium rate differential will develop in 1995 if FDIC lowers deposit insurance premium rates for BIF members after BIF reaches its designated reserve ratio, although the duration and magnitude of the differential are subject to significant uncertainties. FDIC’s projections indicate that significant premium rate differentials will exist between BIF and SAIF until SAIF’s capitalization. Although FDIC projects that SAIF will reach its designated reserve ratio in the year 2002, the timing is uncertain and could be affected by higher than projected insurance losses from failed institutions. Assuming SAIF-insured thrifts continue to be responsible for paying the FICO bond interest, the differential in premium rates will continue after SAIF’s capitalization for the duration of the FICO obligation. Accelerated shrinkage in the portion of SAIF’s assessment base available to pay FICO could also cause SAIF premiums to be even higher than the current average rate of 24 basis points.

SAIF’s outlook is tenuous given the various uncertainties surrounding its exposure to insurance losses from future financial institution failures and changes in its assessment base, along with the impact of a significant premium rate disparity between its members and those of BIF. Because the fixed FICO obligation is significant in relation to the portion of SAIF’s assessment base whose premiums can be used to pay FICO, future shrinkage in SAIF’s assessment base or additional purchases of thrift deposits by BIF members could affect SAIF members’ ability to pay the FICO obligation.
SAIF’s premium rates could thus be higher than projected, causing the premium differential to be larger than currently projected. The higher premium rates could induce further shrinkage in SAIF’s assessment base and jeopardize future payment of the FICO interest obligation.

The potential premium rate differential between BIF and SAIF discussed in chapter 2 is likely to have a significant impact on the banking and thrift industries’ costs and on their ability to attract deposits and capital. Reliable statistical estimates are not available to predict banks’ and thrifts’ responses to a premium rate differential. However, the lower cost of insurance coverage could motivate banks to increase interest rates paid on deposits and improve customer services in order to compete more aggressively for deposits. Thrifts would likely incur additional costs in their attempt to match bank actions and remain competitive with banks for deposits.

Banks’ and thrifts’ actions, and the impact of those actions on thrift industry earnings and capital, will depend on the duration and amount of the premium differential, which are subject to the uncertainties discussed in chapter 2. The cost increase thrifts are likely to incur will represent a larger share of earnings for thrifts that depend heavily on deposits for funding and have low earnings. Additionally, the high premium rates could motivate thrifts to replace deposits with nondeposit sources of funding in an effort to reduce the costs associated with the premium rate differential. Such action could result in further shrinkage in SAIF’s assessment base and could lead to higher insurance premium rates charged by SAIF.

Because reliable statistical estimates of the likely behavior do not exist, BIF and SAIF member responses to a reduction in BIF premium rates cannot be predicted with a high degree of certainty.
Consequently, when analyzing the potential effects of the premium rate differential on the thrift and banking industries, it is necessary to make assumptions regarding bank and thrift behavior. The fact that banks and thrifts compete in a wide market that includes nondepository financial institutions adds to the uncertainty in predicting banks’ responses to a decline in insurance premium rates. Competitive factors within the broader financial marketplace could determine whether banks use their reduction in insurance premiums to increase interest rates paid on deposits and improve customer service, and could also affect the portion of the savings from reduced premiums that banks pass on to customers.

If banks pass on all or part of their savings to customers, it is likely that SAIF members will match bank actions in order to remain competitive. The borrowing and lending activities of SAIF members have few unique characteristics relative to BIF members that would help them remain competitive without matching bank actions. Commercial banks compete with thrift institutions in local mortgage origination markets and business lending, and both types of institutions compete for customer deposits to fund their activities.

The portion of the premium reduction that banks pass through to depositors, as well as the extent of SAIF members’ attempts to match those actions, are both uncertain factors that will be significant in determining the actual cost increase to SAIF members resulting from the premium rate differential. Thrifts could potentially reduce these costs by replacing deposits with nondeposit sources of funding. If banks do not pass on the benefits of their lower premium expenses to customers and instead use those benefits to directly increase earnings, the cost increase to SAIF members from the premium differential would be zero.
If banks pass 100 percent of their reduction in insurance premiums through to their customers and SAIF members fully match banks’ actions, SAIF members would absorb 100 percent of the premium differential through increased costs. Similarly, if banks pass 50 percent of their reduction through and SAIF members fully match those actions, SAIF members would absorb 50 percent of the premium differential.

If BIF members pass 50 percent of the savings associated with FDIC’s projected decline in premiums through to their customers and SAIF members fully match those actions, the cost increase for SAIF members would average about 4.8 percent of annual after-tax earnings, assuming a 19.5 basis point premium differential. The cost increase to SAIF members would be double if BIF members pass 100 percent of their savings through to customers and SAIF members fully match BIF-member actions.

The cost increase as a percentage of earnings for individual SAIF members depends on their profitability, as well as the extent to which their assets are financed with assessable deposits. The median return on assets for SAIF members is about 100 basis points, and most SAIF members finance 60 to 90 percent of their assets with assessable deposits. Under the 50-percent absorption assumption, the cost increase for institutions with a return on assets of 100 basis points ranges from about 3.9 percent of annual after-tax earnings at the 60-percent deposit level to about 5.8 percent at the 90-percent level. These costs would double under a 100-percent absorption scenario. Institutions with a return on assets of 50 basis points, or one-half the median, would face double the cost increase as a share of earnings at each level of assessable deposits. Further, this scenario could cause institutions that would otherwise have had low earnings to begin incurring losses.
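The earnings-impact percentages above can be reproduced under one assumption the report does not state: an effective tax rate, taken here as roughly 34 percent, which reconciles the pre-tax premium cost with the after-tax earnings comparison. The sketch below is an illustration of that arithmetic, not the report's own model.

```python
# Illustrative reconstruction of the earnings-impact figures. The 34%
# effective tax rate is an assumption chosen to reconcile the report's
# numbers; the rate underlying the report's estimates is not stated.
TAX_RATE = 0.34

def cost_share_of_earnings(diff_bp, pass_through, deposits_to_assets, roa_bp):
    """After-tax premium cost as a fraction of after-tax earnings."""
    pretax_cost = diff_bp / 1e4 * pass_through * deposits_to_assets
    after_tax_cost = pretax_cost * (1 - TAX_RATE)
    return after_tax_cost / (roa_bp / 1e4)

# 19.5 bp differential, 50% pass-through fully matched, median 100 bp ROA:
low = cost_share_of_earnings(19.5, 0.5, 0.60, 100)    # 60% assessable deposits
high = cost_share_of_earnings(19.5, 0.5, 0.90, 100)   # 90% assessable deposits
print(round(low * 100, 1), round(high * 100, 1))      # ~3.9 and ~5.8 percent
```

Halving the ROA argument to 50 basis points doubles both percentages, matching the report's statement about institutions earning half the median.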
The cost increase associated with the premium rate differential would deepen the losses of institutions already experiencing losses. Prolonged periods of losses deplete institution capital and can eventually lead to failure. However, an institution’s earnings can vary dramatically over time, so it is also important to consider an institution’s likely earnings over the time horizon of the premium rate differential.

Because the cost of the premium differential is also related to the share of assets financed with assessable deposits, SAIF members are likely to replace deposits with other funding sources, such as Federal Home Loan Bank advances. Some of the costs referred to above could therefore be mitigated if an institution replaces deposits with other sources of funding. In the aggregate, however, the cushion provided by such substitution is limited, because SAIF’s premium rates would eventually need to be increased in response to declines in the portion of SAIF’s assessment base available to pay FICO in order to continue paying the FICO bond interest.

Although the impact of the premium rate differential will be more severe for institutions with low earnings and low capital, the impact should be considered over the duration of the differential. Some SAIF members are likely to fail in their business operations whether or not a premium disparity develops. However, institutions that are currently troubled could recover within a short period: national, regional, and local economic fluctuations cause institutions to experience relatively low earnings for a number of years, followed by a subsequent recovery. The existence of a differential could make the climb back to recovery more difficult. For example, the state of California has experienced significant declines in real estate prices over the past few years.
Approximately 26 percent of all thrift industry assets are held in California, and, in 1993, 78 of the 98 SAIF members in California had a return on assets of less than 100 basis points. Some of these institutions could ultimately fail with or without the introduction of a premium differential. However, many of them could experience earnings growth if real estate values rebound and asset quality subsequently improves.

The premium differential will reduce earnings for SAIF members. In addition, the differential, as well as the expectation of a future differential, will likely reduce capital investment in SAIF-member institutions compared with the outcomes that would result without the disparity. Unfortunately, reliable statistical estimates do not exist to predict how capital investment in financial institutions responds to changes in earnings. Furthermore, a number of other factors also affect capital investment in financial institutions, including the term structure of interest rates and the regulatory environment in which the institutions operate. It should be noted, however, that the thrift industry as a whole is currently well-capitalized, with a median equity capital ratio in excess of 8 percent at September 30, 1994.

The potential premium rate differential is likely to affect banks’ and thrifts’ costs and their ability to attract deposits and capital. While predicting the response of banks and thrifts to the lowering of premium rates for BIF members is subject to considerable uncertainty, banks are likely to take at least some advantage of their lower cost of insurance coverage to expand their deposit base and capital by offering incentives to customers. The likely reaction of thrifts would be to match bank actions to retain and compete for deposits.
The effect of such actions on thrift earnings and capital will depend on the duration and size of the premium differential but will generally be more severe for thrifts already experiencing low earnings or losses and for thrifts that rely heavily on deposits for funding. Thrifts may also replace deposits with nondeposit sources of funding in an effort to reduce their costs relative to banks, which would further decrease SAIF’s assessment base and could lead to a widening of the premium differential.

Several policy options exist to prevent a premium rate differential between BIF and SAIF members from occurring or to reduce the size and duration of the projected differential. If a premium rate differential is prevented, many of the potential negative effects on the thrift industry and SAIF discussed in chapters 2 and 3 could be avoided. Options that reduce the differential would likely make the potential effects on thrift institutions and SAIF less severe than if a higher differential develops. Some options also reduce or eliminate the risks associated with a thinly capitalized fund and a small assessment base. Aside from the option of taking no action at this time, most of the options in this chapter involve shifting at least some costs to either BIF members or the taxpayer. Table 4.1 presents most of the policy options discussed throughout this chapter. These options assume continued servicing of the FICO interest obligation.

We project that, at December 31, 1995, the present value of the total cost of increasing SAIF’s reserves to the designated ratio of 1.25 percent of insured deposits and funding the FICO interest obligation, discounted at 8.60 percent, will be $13.8 billion. When discounted at 7.55 percent, the total cost increases to $14.4 billion. Based on FDIC’s projections, SAIF would need additional capital of $6.1 billion to achieve its designated reserve ratio at the end of 1995.
The present value of the total FICO interest obligation from 1996 through 2019 is approximately $7.7 billion using an 8.60-percent discount rate and $8.3 billion using a 7.55-percent discount rate. SAIF’s fund balance at December 31, 1995, is projected by FDIC to be $2.4 billion, which would represent a ratio of reserves to estimated insured deposits of 0.35 percent at year-end 1995. SAIF would therefore need an additional $6.1 billion in capital at December 31, 1995, to reach its designated reserve ratio, for a total capital base of $8.5 billion.

If no action is taken, and FDIC lowers BIF-member premiums after that Fund reaches its designated reserve ratio in 1995, several significant risks to SAIF’s long-term outlook exist that could result in the need for future use of appropriated funds. These risks are interrelated and could result in premium rates increasing to a level that cannot be sustained by SAIF members, thereby calling into question SAIF’s long-term viability and its ability to service its members’ long-term FICO obligation.

A thinly capitalized SAIF is at risk of lacking sufficient capital to withstand significant deviations from the failure assumptions used in FDIC’s projections, particularly over the next several years. As discussed in chapters 2 and 3, a premium rate differential carries the risks that SAIF members will have difficulty competing with BIF members and attracting capital, possibly leading to additional shrinkage in SAIF’s assessment base. This is particularly true if future servicing of the FICO interest obligation after SAIF’s capitalization is a factor FDIC considers in setting SAIF’s future premium rates. According to FDIC’s projections, the annual FICO interest expense currently represents about 16 basis points in relation to the portion of SAIF’s assessment base available to pay FICO.
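These present values can be checked with a simple discounted-cash-flow sketch. The roughly $780 million annual FICO interest payment below is inferred from the basis-point figures in this report rather than stated directly, and payment timing is simplified to year-end, so the results land within a few percent of the $7.7 billion and $8.3 billion cited above.

```python
# Present-value check on the FICO interest figures (illustrative; the
# ~$780 million annual payment and year-end timing are assumptions).
ANNUAL_PAYMENT = 780e6
YEARS = range(1996, 2020)               # payments in 1996 through 2019

def present_value(rate, valuation_year=1995):
    """Discount the fixed annual FICO payments back to year-end 1995."""
    return sum(ANNUAL_PAYMENT / (1 + rate) ** (y - valuation_year)
               for y in YEARS)

print(round(present_value(0.0860) / 1e9, 1))   # within a few percent of $7.7 billion
print(round(present_value(0.0755) / 1e9, 1))   # within a few percent of $8.3 billion
```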
FDIC currently projects an annual shrinkage of 2 percent in the portion of SAIF’s deposit base available to pay FICO bond interest, which will make the FICO obligation more expensive in relation to the assessment base. According to FDIC’s projections, the FICO obligation will require 19 basis points at the time of SAIF’s capitalization, increasing to 23.5 basis points in the year 2012. However, as discussed in chapters 2 and 3, SAIF’s future level of assessment base shrinkage is extremely uncertain and could be greater than projected. Greater than projected shrinkage in the portion of SAIF’s assessment base available to pay FICO would increase the risk that SAIF members could not service the annual FICO interest obligation without FDIC further increasing premiums above SAIF’s currently projected rates.

Several options exist to prevent a premium rate differential and its potentially adverse effects from occurring or to reduce the size and duration of the projected differential. The Congress could pass legislation to merge BIF and SAIF into one combined deposit insurance fund, thereby providing a broad assessment base and diversification of risk. Within a merger scenario, several options exist for handling the costs associated with SAIF’s capital needs and the fixed FICO obligation. Other options involve a continuation of separate insurance funds for the banking and thrift industries. Each option has different outcomes, however, and some options carry more risk and uncertainty than others.

Arguments have been made that any option involving banking industry contributions to service the FICO interest obligation is unfair to that industry. These arguments contend that the FICO obligation was incurred during the thrift crisis of the 1980s and, as such, is an obligation of the thrift industry.
However, there are also arguments that the institutions that make up today’s thrift industry still exist because they are healthy, well-managed institutions that avoided the mistakes made by many thrifts in the 1970s and 1980s that ultimately led to the thrift debacle. As such, the argument goes, they should be no more responsible for the FICO interest burden than the banking industry. The options discussed in the remainder of this chapter do not attempt to judge the merits of either side of this issue. Rather, they simply present how various approaches to dealing with the premium rate differential would affect banking and thrift institutions and eliminate or reduce the risks discussed throughout this report.

One option available to the Congress is to pass legislation merging BIF and SAIF into one combined deposit insurance fund. A merger would provide a large assessment base and diversification of risk, thereby eliminating the current risks associated with a thinly capitalized SAIF. Within a merger scenario, several options exist for dealing with the FICO obligation and SAIF’s capitalization. The following scenarios assume a merger on January 1, 1996.

The Congress could pass legislation to merge BIF and SAIF into a combined deposit insurance fund on January 1, 1996, with each fund bringing its current level of reserves into the combined fund. If BIF and SAIF are combined without first capitalizing SAIF, and all members of the combined fund continue to pay premiums at the current average annual rate of 23 to 24 basis points until the combined fund reaches the designated reserve ratio, the combined fund would be capitalized in mid-1996. This would be 1 year later than BIF’s currently projected capitalization in 1995 and 6 years earlier than SAIF’s currently projected capitalization in 2002.
Once the combined fund is capitalized, premium rates for its members could be lowered and would average approximately 6 to 7 basis points annually. This rate would be sufficient to service the annual FICO interest obligation and would be about 2 basis points higher than the future premium rate of 4 to 5 basis points FDIC currently projects for BIF members once BIF attains its designated reserve ratio. Under this scenario, no premium rate differential would develop, and the risks associated with a rate differential would therefore be eliminated. The risks associated with a small assessment base would also be eliminated, since the FICO obligation would be spread over the combined base. BIF members would, in effect, provide most of the initial capital infusion and pay a portion of the FICO obligation. Assuming that the FICO obligation is spread proportionally between the BIF and SAIF assessment bases and that the bases grow at equal rates after the merger, the present value of the additional premiums BIF members would pay under this scenario would be approximately $11.2 billion.

Alternatively, the Congress could pass legislation to merge BIF and SAIF into a combined deposit insurance fund but require that both BIF and SAIF be adequately capitalized prior to the merger. Under this scenario, FDIC could charge SAIF members a special assessment to bring SAIF’s reserves up to the designated reserve ratio before merging the two funds. SAIF’s reserves could be raised to 1.25 percent of insured deposits through a one-time assessment of approximately 84 basis points on the Fund’s assessment base in 1995, prior to merging the funds. A merger under this scenario would allow BIF to capitalize in 1995, as currently projected, and BIF-member premiums could then be reduced from their current level on schedule with FDIC’s current projections.
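A rough sketch shows why spreading FICO over the combined base adds only about 2 to 3 basis points to the projected BIF rate. The BIF assessment base is not stated in this report; it is inferred below from the later statement that BIF would bear 77 percent of a proportional split, and the $780 million FICO figure is likewise inferred, so both inputs are approximations.

```python
# Why the combined-fund rate is only ~2 bp above the BIF-only rate
# (illustrative; the BIF base and the FICO interest figure are inferred).
FICO_INTEREST = 780e6                    # inferred annual FICO interest
saif_base = 711e9                        # SAIF assessment base, 1994
bif_base = saif_base * (0.77 / 0.23)     # implied by the 77/23 split, ~$2.4 trillion
combined_base = saif_base + bif_base

fico_bp = FICO_INTEREST / combined_base * 1e4
print(round(fico_bp, 1))                 # ~2.5 bp on the combined base
# Added to FDIC's projected 4-5 bp baseline, this yields roughly the
# 6-7 bp combined-fund premium described above.
```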
The new premium rates charged to the combined fund members would average approximately 6 to 7 basis points annually. These rates would be sufficient to service the annual FICO interest obligation and would be about 2 basis points higher than the future annual premium rates of 4 to 5 basis points currently projected for BIF members. Under this scenario, the risks associated with a premium differential and a thinly capitalized fund would be eliminated. Additionally, the risks associated with a small assessment base would be eliminated, since the FICO obligation would be spread over the combined base. SAIF members would provide the necessary infusion of capital, and BIF members would pay a share of the FICO obligation. Assuming equal growth rates among all fund members after the merger, the present value of the additional premiums BIF members would pay under this scenario would be approximately $5.9 billion.

An 84 basis point special assessment to capitalize SAIF would pose some risks to the thrift industry. Specifically, SAIF members and other institutions with SAIF-insured deposits would be forced to contribute $6.1 billion more to SAIF in 1995 than currently projected in order to bring sufficient capital into the combined fund. Clearly, this is a significant cost to these institutions. Even for profitable institutions, the special assessment could result in losses and a reduction in capital in the year of the assessment. Few institutions that currently meet capital requirements would fall below those requirements as a result of the special assessment. However, for some institutions with both low earnings or losses and low capital that regulators have identified as troubled, the special assessment could accelerate failure. The impact of the special assessment on thrifts could be minimized by spreading it over several years.
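The special-assessment figure is straightforward to verify. Using the 1994 assessment base of $711 billion as a stand-in for FDIC's projected 1995 base (an approximation, which is one reason the rate is quoted as "approximately" 84 basis points), the one-time rate needed to raise $6.1 billion is:

```python
# One-time assessment arithmetic (illustrative; the 1995 base is
# approximated with the 1994 figure of $711 billion).
capital_needed = 6.1e9                   # shortfall to SAIF's 1.25% target
saif_base_approx = 711e9
assessment_bp = capital_needed / saif_base_approx * 1e4
print(round(assessment_bp))              # ~86 bp, close to the quoted 84 bp
```

The small gap between ~86 and 84 basis points reflects the difference between the 1994 base used here and FDIC's projected 1995 base.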
However, the risks this option poses to the thrift industry are not as great as those associated with the premium rate differential indicated in FDIC’s current projections, given that the differential would otherwise persist while the annual FICO interest obligation is serviced through 2019. The special assessment would be a one-time cost increase to SAIF members, after which their rates would decline significantly and would equal those charged to BIF members. Overall, the one-time assessment of 84 basis points, combined with a merger of the funds, would carry significantly less risk than the currently projected rate differential extended through the duration of the FICO interest obligation, since the cost to SAIF members would be less than the cost they would otherwise incur if required to capitalize SAIF and fund the entire FICO obligation. Additionally, a future premium rate differential would be eliminated.

The Congress could also pass legislation to merge BIF and SAIF into a combined deposit insurance fund with all members contributing to capitalize the fund, but require the former SAIF members to retain responsibility for servicing the annual FICO interest obligation. Under this scenario, BIF and SAIF are combined without first capitalizing SAIF. All members of the combined fund would continue to pay premiums at the current average annual rate of 23 to 24 basis points until the combined fund achieves a ratio of reserves to insured deposits of 1.25 percent in 1996. Premium rates would then decline for both former BIF and former SAIF members from their current level; however, rates for the former SAIF members would decline only slightly if they are set at a level sufficient to pay the FICO obligation. A premium rate differential would thus still develop after the combined fund is capitalized, because former SAIF members would remain responsible for servicing the FICO interest obligation.
BIF members would, in effect, provide a substantial portion of the capital infusion needed to capitalize the combined fund and the cushion against exposure to future financial institution failures, paying approximately $5.8 billion more in premiums to cover the capital infusion. It is also possible that the combined fund would incur higher than projected costs in the future if the former SAIF members are negatively affected by the premium differential that would still develop under this scenario. If this approach were employed, the risks associated with a small assessment base would not change, since the former SAIF members would still retain responsibility for the FICO obligation. However, the risks associated with a thinly capitalized fund would be eliminated, since the combined fund would be capitalized and better able to withstand insurance losses than an undercapitalized SAIF. The risks associated with the premium differential would probably not change, since servicing the FICO obligation would continue to produce a significant differential.

Several options exist for maintaining BIF and SAIF as separate funds while avoiding the immediate use of appropriated funds. The Congress could require that BIF members fund a portion of the FICO obligation, thereby reducing the size and duration of the projected premium rate differential. FDIC could reduce SAIF’s premiums before the Fund capitalizes, extending the time frame in which SAIF becomes fully capitalized but reducing the size of the premium rate differential currently projected through the year 2002. The Congress could also make all SAIF resources available to service the FICO obligation.

As discussed previously, servicing the interest on the FICO bonds represents a substantial cost for the portion of SAIF’s assessment base responsible for paying FICO.
This creates the potential for a significant premium rate differential even with a fully capitalized insurance fund. To eliminate this situation and place thrifts on an equal competitive footing with banks, the Congress could pass legislation requiring BIF members to share the cost of servicing the FICO obligation with SAIF members beginning in 1996. Under this option, if BIF and SAIF members shared the FICO obligation proportionally based on their projected 1995 assessment bases, BIF members would fund 77 percent of the FICO obligation and SAIF members would fund the remaining 23 percent, eliminating the risks associated with a small assessment base servicing the FICO obligation. BIF would still attain its designated reserve ratio in 1995 as currently projected; however, SAIF would capitalize in the year 2000, 2 years earlier than currently projected by FDIC. After capitalization, SAIF’s projected premium rates could be lowered to a level comparable with BIF’s, thereby significantly reducing the risks associated with the premium differential. Under this scenario, a significant premium rate differential would still exist until the year 2000, when SAIF capitalizes. The present value of the additional premium cost to BIF members under this scenario would be approximately $5.9 billion. SAIF members would still be required to capitalize SAIF and would fund their proportionate share of the FICO obligation. The present value of SAIF members’ cost under this scenario would be approximately $7.9 billion. Given the relative capital positions of the two insurance funds and the risks associated with a prolonged period of a significant premium rate differential, another option would be for the Congress to pass legislation requiring BIF to raise sufficient funds to pay the FICO obligation. 
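The proportional split and the present-value figures above follow standard annuity arithmetic. The sketch below is illustrative only: the assessment bases, annual FICO interest payment, and payment horizon are hypothetical stand-ins, with the discount rate set at the 8.6 percent used elsewhere in this chapter.

```python
# Illustrative sketch of a proportional FICO split and the present value
# of a level annual interest stream. All dollar inputs are hypothetical.

def pv_annuity(annual_payment, rate, years):
    """Present value of a level payment made at the end of each year."""
    return annual_payment * (1 - (1 + rate) ** -years) / rate

# Hypothetical 1995 assessment bases (dollars).
bif_base = 2_400e9
saif_base = 700e9
bif_share = bif_base / (bif_base + saif_base)    # roughly 77 percent
saif_share = saif_base / (bif_base + saif_base)  # roughly 23 percent

annual_fico_interest = 780e6  # hypothetical annual FICO interest payment
years_remaining = 24          # hypothetical horizon
discount_rate = 0.086         # discount rate cited in this chapter

pv_total = pv_annuity(annual_fico_interest, discount_rate, years_remaining)
print(f"BIF share: {bif_share:.0%}, SAIF share: {saif_share:.0%}")
print(f"Present value of the interest stream: ${pv_total / 1e9:.1f} billion")
```

Because the FICO interest payments are fixed in dollar terms, the present-value cost of any sharing arrangement is driven entirely by each group's share, the discount rate, and how long the payments run.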
If FDIC maintained BIF’s premium rate at the current annual average of 23 basis points through early 1997, sufficient funds would be raised to pay the FICO obligation on a present value basis, assuming a discount rate of 8.6 percent. BIF members would pay approximately $7.7 billion more in premiums than currently projected by FDIC. Under this scenario, BIF premiums would not be reduced until 1997. Additionally, SAIF would reach its designated reserve ratio in 1999, 3 years earlier than currently projected by FDIC. With SAIF’s earlier capitalization, the risks associated with a thinly capitalized fund would be reduced. After SAIF’s capitalization, its premium rates would be comparable to BIF’s. Because SAIF’s members would, in effect, be relieved of the FICO interest obligation, the risks associated with a small assessment base paying the fixed FICO interest obligation would be eliminated. Under current law, FDIC has the option of lowering SAIF premiums prior to SAIF’s capitalization. FDIC’s Board of Directors has the authority to lower SAIF premiums to an average annual rate of 18 basis points until January 1, 1998, after which the average rate must remain at 23 basis points or higher until the Fund is capitalized. Reducing the average annual rate to 18 basis points is presently projected to delay SAIF’s capitalization for 2 years, until 2004. Although this option would slightly reduce the size of the projected premium rate differential, it does little to address the risks associated with a prolonged premium rate differential. This option would also increase the risks associated with a thinly capitalized fund, since SAIF’s capitalization would be delayed until 2004, leaving the Fund vulnerable to any increase in thrift failures. As discussed earlier, SAIF’s inability to use assessments collected from Oakar and Sasser institutions to help fund FICO interest payments is a significant limitation on its ability to service the industry’s FICO obligation. 
Currently, a significant and growing portion of SAIF’s assessment base is not available for this purpose. The Congress could modify current law to specify that all SAIF assessments, including assessments paid by Oakar and Sasser institutions, are available to service the FICO obligation. This action could help SAIF meet future FICO payments without a need to maintain premiums at the current rate beyond the date SAIF attains its designated reserve ratio. However, the risks associated with a thinly capitalized fund over the next several years would not be eliminated. Additionally, the risks associated with the projected premium rate differential would also not be eliminated, as the annual FICO interest obligation would still represent a significant additional cost in SAIF’s premium rates that would not be present in BIF’s premium rates. As discussed in chapter 2, if that portion of SAIF’s assessment base available to pay the FICO obligation declines beyond FDIC’s current projections, it is possible that SAIF would need to charge higher-than-projected premium rates in the years following its capitalization. These higher premium rates would increase the size of the premium differential and the potential for negative effects on SAIF-insured institutions and SAIF. If this were the only action taken, a premium rate differential would not be avoided or reduced. Consequently, the potential negative effects for SAIF-insured institutions and SAIF discussed in chapters 2 and 3 would not be avoided or mitigated. The options discussed previously to deal with the funding concerns for SAIF and the thrift industry’s long-term FICO obligation require significant cost to be borne by banks, thrifts, or a combination of both industries. Alternatively, other options are available that shift this burden to the Treasury and, ultimately, the taxpayers. The Congress could provide SAIF with new funding as a source of capital and as a means to pay the FICO obligation. 
Another option is to make available for these purposes funds that were previously appropriated, or funds that were authorized but never appropriated. Each of these funding options would require legislation and would be subject to budget scorekeeping procedures. The Congress could appropriate funds to SAIF as a source of capital and as a means to pay the FICO obligation. As discussed earlier, SAIF would require approximately $14.4 billion at the end of 1995 in order to reach its designated reserve ratio and fund its future FICO obligation, using a discount rate of 7.55 percent. The Resolution Trust Corporation Refinancing, Restructuring, and Improvement Act of 1991 (Public Law 102-233) provided RTC with $25 billion in December 1991 to fund resolution activity. However, these funds were only available for obligation until April 1, 1992. On that date, RTC returned $18.3 billion of unobligated funds to the Treasury. In December 1993, the RTC Completion Act removed the April 1, 1992, deadline, thus making the $18.3 billion available to RTC for completion of its resolution activities. The RTC Completion Act also makes any unused RTC funding available to SAIF for insurance losses during the 2-year period beginning on the date of RTC’s termination. As of December 31, 1993, RTC’s audited financial statements showed that RTC could have $13 billion in unused loss funds after resolving all institutions for which it is responsible. SAIF’s use of RTC funding is subject to significant restrictions. Before these funds can be used, FDIC must certify to the Congress, among other things, that (1) SAIF-insured institutions are unable to pay premiums sufficient to cover insurance losses without adversely affecting their ability to raise and maintain capital or to maintain the assessment base, and (2) an increase in premiums could reasonably be expected to result in greater losses to the government. 
The Congress could pass legislation removing the restrictions on SAIF’s use of RTC funding and make the funds available to capitalize SAIF and to pay the FICO obligation. Based on the estimates presented in RTC’s December 31, 1993, audited financial statements, it appears that significant funding may be available to both capitalize SAIF and fund a substantial portion of the FICO obligation. If this funding were made available at the end of 1995, SAIF would need approximately $6.1 billion to reach its designated reserve ratio, as well as $8.3 billion on a present value basis to cover the future FICO obligation. Because some uncertainty exists regarding RTC’s final loss funding needs, the Congress could withhold a portion of the RTC funding for possible future use by RTC until it is either used by RTC, or it becomes fairly certain that RTC will not need the funding. If the RTC funding were used as a capital infusion and as a mechanism for funding a substantial portion of the FICO obligation, the premium differential would be significantly reduced. Therefore, the risk of negative effects on SAIF members and SAIF resulting from the differential would also be substantially reduced. The capital infusion would provide SAIF with a cushion against future losses, and the risks associated with a thinly capitalized fund would be eliminated. The FDI Act, as amended by FIRREA and by the RTC Refinancing, Restructuring, and Improvement Act of 1991, authorized Treasury to provide funding to SAIF each fiscal year from 1993 to 2000 to the extent that the SAIF member assessments deposited in the Fund did not total $2 billion a year. Additionally, Treasury was authorized to make annual payments necessary to ensure that SAIF had a specific net worth, ranging from zero during fiscal year 1992 to $8.8 billion during fiscal year 2000. The cumulative amounts of these payments were not to exceed $16 billion. 
However, while the FDI Act, as amended, authorized the appropriation of funds to the Secretary of the Treasury, such funds were never actually appropriated. These funding provisions were later amended by the RTC Completion Act. That act authorized up to $8 billion for SAIF’s insurance losses incurred in fiscal years 1994 through 1998 and placed restrictions on the availability of these funds similar to the restrictions on the availability of RTC funding. The Congress could pass legislation removing the restrictions on this funding source and appropriate the funds to aid in capitalizing SAIF and funding the FICO obligation. The $8 billion authorized would not be sufficient to both capitalize SAIF and completely fund the FICO obligation. However, it would be sufficient to capitalize SAIF and fund about one-fourth of the FICO obligation. If this funding were authorized and appropriated for these purposes, SAIF would be capitalized when the funds were received, with the overall effect of a capital infusion. SAIF would also be relieved of a significant portion of the future FICO obligation. Under this approach, the premium differential after capitalization would be reduced. Alternatively, these funds could instead be used to fund the FICO obligation, with SAIF members continuing to fund the cost of capitalizing SAIF as well as paying the small portion of the FICO obligation not covered by these funds. Under this option, SAIF members would continue to pay higher premiums than their BIF counterparts for 4 years, and the Fund would be capitalized in 1999. Some uncertainties are associated with these options, since a premium differential would exist, although its size and duration would depend on how these funds were applied. However, the risks associated with the differential would be significantly reduced as a result of reducing either the size or duration of the differential.
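The "about one-fourth" figure above can be checked with simple arithmetic. The sketch below reuses the approximately $6.1 billion capital need and $8.3 billion present-value FICO obligation cited earlier for the RTC-funding option; treating them as the relevant end-of-1995 magnitudes for this option as well is an assumption.

```python
# Rough check of why $8 billion could capitalize SAIF and still cover
# roughly one-fourth of the FICO obligation. The capital-need and FICO
# present-value figures are taken from the RTC-funding discussion above;
# applying them to this option is an assumption.

authorized = 8.0e9
capital_need = 6.1e9  # approximate amount for SAIF's designated reserve ratio
fico_pv = 8.3e9       # approximate present value of the FICO obligation

remainder = authorized - capital_need
print(f"Left after capitalizing SAIF: ${remainder / 1e9:.1f} billion")
print(f"Share of FICO obligation covered: {remainder / fico_pv:.0%}")
```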
Pursuant to a congressional request, GAO reviewed the Federal Deposit Insurance Corporation's (FDIC) proposed reduction of bank insurance premiums once the Bank Insurance Fund (BIF) is recapitalized, focusing on: (1) the likelihood, potential size, and timing of a premium rate differential between banks and thrifts; (2) the possible effects of the premium rate differential on the two industries; (3) the potential adverse effects on the Savings Association Insurance Fund (SAIF); and (4) various policy options to avoid a premium rate differential between BIF and SAIF members. GAO found that: (1) BIF is expected to be fully capitalized during 1995 and FDIC has proposed reducing bank premiums as early as September 1995; (2) SAIF will not be fully capitalized for another 7 years because SAIF premiums are also used to pay the Financing Corporation's (FICO) bond interest; (3) a rate differential of about 19 basis points could exist for up to 24 years; (4) SAIF capitalization may be delayed or thrift insurance premiums may have to be increased if thrift failures increase and the deposit base for FICO payments continues to decline; (5) the SAIF deposit base has declined by 25 percent since 1989 and the portion of the deposit base available for FICO payments has declined by 48 percent; (6) there is no reliable data to predict banks' and thrifts' responses to a premium rate differential; (7) banks could pass some of the savings from reduced premiums to customers by increasing deposit interest rates and improving customer services, while thrifts would incur increased costs of up to 10 percent to remain competitive with banks; and (8) policy options to mitigate rate differential problems include taking no action, merging BIF and SAIF, requiring BIF and SAIF members to share FICO bond interest costs proportionally, using BIF premiums or all SAIF resources to pay FICO bond interest, and using appropriated funds to capitalize SAIF or fund FICO bond interest.
This section provides brief descriptions of the financial services industry and its component sectors, the changing demographic characteristics of the United States, and diversity management. The financial services industry plays a key role in the U.S. economy by, among other things, providing savings vehicles such as insured deposits, providing credit to individuals and businesses, and providing protection against certain financial risks. We defined the financial services industry to include the following sectors:

depository credit institutions, the largest sector, which include commercial banks, thrifts (savings and loan associations and savings banks), and credit unions;

holdings and trusts, which include investment trusts, investment companies, and holding companies;

nondepository credit institutions, which extend credit in the form of loans but are not engaged in deposit banking, and include federally sponsored credit agencies, personal credit institutions, and mortgage bankers and brokers;

the securities industry, which is made up of a variety of firms and organizations (e.g., broker-dealers) that bring together buyers and sellers of securities and commodities, manage investments, and offer financial advice; and

the insurance industry, including carriers and insurance agents, which provides protection against financial risks to policyholders in exchange for the payment of premiums.

Additionally, the financial services industry is a major source of employment in the United States. The financial services firms we reviewed for this study, which have 100 or more staff, employed nearly 3 million people in 2004, according to the EEO-1 data. According to the U.S. Bureau of Labor Statistics, employment in management and professional positions in the financial services industry was expected to grow at a rate of 1.2 percent annually through 2012. According to the U.S. Census Bureau, the U.S. population is becoming more diverse by race and ethnicity. 
In 2001, Census projected that the non-Hispanic, white share of the U.S. population would fall from 75.7 percent in 1990 to 52.5 percent in 2050, with a corresponding increase in the minority share of the population over the same period. Census further projected that the largest increases would be in the Hispanic and Asian populations. According to the Census Bureau’s 2004 American Community Survey results, Hispanics are now the second largest racial/ethnic group after whites. The rapid growth of minorities in the United States may also influence its economic activities. For example, according to Census, the number of firms owned by minorities and women continues to grow faster than the number of other firms. In particular, a recent Census report based on data from the 2002 Economic Census stated that, between 1997 and 2002, Hispanics in the United States opened new businesses at a rate three times faster than the national average. As we stated in a 2005 report, the composition of the U.S. workforce has become increasingly diverse, and many organizations are implementing diversity management initiatives. Diversity management is a process intended to create and maintain a positive work environment that values individuals’ similarities and differences, so that all can reach their potential and maximize their contributions to an organization’s strategic goals and objectives. On the basis of a literature review and discussions with experts, we identified nine leading diversity management principles: (1) top leadership commitment, (2) diversity as part of an organization’s strategic plan, (3) diversity linked to performance, (4) measurement, (5) accountability, (6) succession planning, (7) recruitment, (8) employee involvement, and (9) diversity training. EEO-1 data indicate that overall diversity among officials and managers within the financial services industry did not change substantially from 1993 through 2004, but that changes by racial/ethnic group varied. 
The EEO-1 data also show that certain financial sectors, such as depositories, including commercial banks, are somewhat more diverse at the management level than others, such as securities firms. Additionally, EEO-1 data do not show material differences in management-level diversity based on the size of individual firms within the financial services industry. Figure 1 shows the representation of minorities and whites at the management level within the financial services industry in 1993, 1998, 2000, and 2004 from EEO-1 data. Management-level representation by minorities increased from 11.1 percent to 15.5 percent during the period, while representation by whites declined correspondingly from 88.9 percent to 84.5 percent. Management-level representation by white men declined from 52.2 percent to 47.2 percent during the period, while the percentage of management positions held by white women was largely unchanged at slightly more than one-third. Existing EEO-1 data may actually overstate representation levels for minorities and white women in the most senior-level positions because the “officials and managers” category includes lower- and mid-level management positions that may have higher representations of minorities and white women. According to an EEOC official we spoke with, examples of “officials and managers” would range from the Chief Executive Officer of a major investment bank to an Assistant Branch Manager of a small regional bank. 
A revised EEO-1 form for employers, which becomes effective with the 2007 reporting year, divides the category of “officials and managers” into two hierarchical subcategories based on responsibility and influence within the organization: “executive/senior-level officials and managers” and “first/mid-level officials and managers.” According to a trade association that commented on the revised EEO-1 form, collecting information about officials and managers in this manner will enable the EEO-1 data to more accurately reflect the discriminatory artificial barriers (the “glass ceiling”) that hinder the advancement of minorities and white women to more senior-level positions. Figure 2 provides EEO-1 data for individual minority groups and illustrates their trends in representation at the management level, which varied by group. African-American representation increased from 5.6 percent in 1993 to 6.8 percent in 2000 but declined to 6.6 percent in 2004. Representation by Hispanics and Asians also increased, with both groups representing 4 percent or more of industry officers and managers by 2004. Representation by American Indians remained well under 1 percent of all management-level positions. EEO-1 data show that the depository and nondepository credit sectors, as well as the insurance sector, were somewhat more diverse in specific categories at the management level than the securities and holdings and trusts sectors (see fig. 3). For example, in 2004, the percentage of management-level positions held by minorities ranged from a high of 19.9 percent for nondepository credit institutions (e.g., mortgage bankers and brokers) to a low of 12.4 percent for holdings and trusts (e.g., investment companies). The share of positions held by white women varied from a high of 40.8 percent in the insurance sector to a low of 27.4 percent among securities firms. 
The percentage of white men in management-level positions ranged from a high of 57.5 percent in the securities sector to a low of 44.0 percent in both the depository (e.g., commercial banks) and nondepository credit sectors. Consistent with the EEOC data, a 2005 SIA study we reviewed found limited diversity among key positions in the securities sector. EEO-1 data also show that the representation of minorities and whites at the management level in financial services firms generally does not vary by firm size (see fig. 4). Specifically, we did not find a material difference in the diversity of those in management-level positions among firms with 100 to 249 employees, 250 to 999 employees, and more than 1,000 employees. There were some variations across financial sectors by size. However, we note that SIA’s 2005 study of securities firms did find variation in diversity by firm size for a variety of positions within the securities sector. Officials from financial services firms and industry trade associations we contacted stated that the rapid growth of minorities as a percentage of the overall U.S. population and increased global competition have convinced their organizations that workforce diversity is a critical business strategy. Financial firm officials we spoke with said that their top leadership was committed to implementing a variety of workforce diversity programs to help enable their organizations to take advantage of the full range of available talent to fill critical positions and to maintain their firms’ competitive position. However, officials from financial services firms and trade associations also described the challenges they faced in implementing these initiatives, such as ongoing difficulties in recruiting and retaining minority candidates and in gaining commitment from employees to support diversity initiatives, especially at the middle management level. 
Over the past decade, the financial services firms we contacted have implemented a variety of initiatives to increase workforce diversity, including programs designed to recruit and retain minority and women candidates to fill key positions. Some bank officials said that they had developed scholarship and internship programs to encourage minority high school and college students to consider careers in banking, with the goal of increasing the diversity of future applicant pools. Some firms have established formal relationships with colleges and Masters of Business Administration (MBA) programs to recruit minority students from these institutions. Some firms and trade organizations have also developed partnerships with groups that represent minority professionals, such as the National Black MBA Association and the National Society of Hispanic MBAs, as well as with local communities, to recruit candidates using events such as conferences and career fairs. Officials from other firms said that the goal of these partnerships was to build long-term relationships with professional associations and communities and to increase the visibility of financial services firms among potential employees. Officials from financial services firms also said that they had developed programs to foster the retention and professional growth of minority and women employees. Specifically, these firms have

encouraged the establishment of employee networks. For example, a commercial bank official told us that, since 2003, the company had established 22 different employee networks that enabled employees from various backgrounds to meet each other, share ideas, and create informal mentoring opportunities;

established mentoring programs. For example, an official from another commercial bank told us that the company had a Web-based program that allowed employees of all backgrounds to connect with one another and to find potential mentors;

instituted diversity training programs. Officials from financial services firms said that these training programs increased employees’ sensitivity to and awareness of workforce diversity issues and helped staff deal effectively with colleagues from different backgrounds. One commercial bank we contacted requires its managers to take a 3- to 5-day training course on dealing with a diverse workforce. The training stressed the concept of workforce diversity and provided a forum in which employees spoke about their differences through role-playing modules. The bank has also developed a diversity tool kit and certification program as part of the training; and

established leadership and career development programs. For example, an official from an investment bank told us that the head of the firm would meet with every minority and woman senior executive to discuss his or her career development. For lower-level individuals, the investment bank official said that the organization had created a career development committee to serve as a forum for discussions on career advancement.

Officials from some financial services firms we contacted, as well as industry studies, noted that financial services firms’ senior managers were involved in diversity initiatives. For example, SIA’s 2005 study on workforce diversity in the securities industry found that almost half of the 48 securities firms surveyed had full-time senior managers dedicated to diversity initiatives. According to a report from an executive membership organization, an investment bank had developed a program that involved lower-level employees from diverse backgrounds, along with their senior managers, to develop diversity initiatives. Moreover, officials from a few commercial banks that we interviewed said that the banks had established diversity “councils” of senior leaders to set the vision, strategy, and direction of diversity initiatives. 
The 2005 SIA study and a few of the firm officials we spoke with also suggested that some companies have instituted programs that link managers’ compensation with progress made toward promoting workforce diversity. Officials from one investment bank said that managers of each business unit reported directly to the company’s Chief Executive Officer who determined their bonuses in part based on the unit’s progress in hiring, promoting, and retaining minority and women employees. According to some officials from financial services firms, their firms have also developed performance indicators to measure progress in achieving diversity goals. These indicators include workforce representation, turnover, promotion of minority and women employees, and internal employee satisfaction survey responses. An official from a commercial bank said that the company monitored the number of job openings, the number of minority and women candidates who applied for each position, the number of such candidates who interviewed for open positions, and the number hired. In addition, a few officials from financial services firms told us that they had developed additional indicators such as promotion rates for minorities and whites and compensation equity across ranks for minorities and whites. Officials from several financial services firms stated that measuring the results of diversity efforts over time was critical to the credibility of the initiatives and to justifying the investments in the resources such initiatives demanded. Financial services trade organizations from the securities, commercial banking, and insurance sectors that we contacted have been involved in promoting workforce diversity. The following are some examples: In 1996 SIA formed a “diversity committee” of senior-level executives from the securities industry to assist SIA’s member firms in developing their diversity initiatives and in their efforts to market to diverse customers. 
This committee has begun a number of initiatives, such as developing diversity management tool kits, conducting industry demographic and diversity management research, and holding conferences. SIA’s diversity tool kit provides step-by-step guidelines on establishing diversity initiatives, including identifying ways to recruit and retain diverse candidates, overcoming challenges, measuring the results of diversity initiatives, and creating strategies for transforming a firm’s culture. In addition, since 1999 SIA has been conducting an industry-wide diversity survey every 2 years to help its members measure their progress toward increasing workforce diversity. The survey includes aggregated data that measure the number of minority and women employees in the securities industry at various job levels and a profile of securities industry activities designed to increase workforce diversity. In 2005, SIA held its first diversity and human resources conference, which was designed so that human resources and senior-level managers could share best practices and current strategies and trends in human resource management and diversity. The American Bankers Association collaborated with the Department of Labor’s Office of Federal Contract Compliance Programs in 1997 to identify key issues that banks should consider in recruiting and hiring employees in order to create fair and equal employment opportunities. The issues include managing the application process and selecting candidates in a way that ensures the equal and consistent treatment of all job applications. The Independent Insurance Agents and Brokers of America (IIABA) established the IIABA Diversity Task Force in 2002 to promote diversity within the insurance agent community. The task force is charged with fostering a profitable independent agency force that reflects, represents, and capitalizes on the opportunities of the diverse U.S. population. 
Among its activities, the diversity task force is developing a database of minority insurance agents and minority-owned insurance agencies as a way to help insurance carriers seeking to expand their business with a diverse agent base and potentially reach out to urban areas and underserved markets. According to IIABA, the task force has just completed a tool kit for IIABA state associations, volunteer leadership, and staff. This step-by-step guide advises state associations on how to recruit and retain a diverse membership through their governance, products, service offerings, and association activities. In addition, IIABA participates in a program to educate high school and community college students on careers in insurance, financial services, and risk management and encourages students to pursue careers in the insurance industry. In 2005, the Mortgage Bankers Association (Association) established plans and programs to increase the diversity of its own leadership, as well as to promote diversity within the Association’s member firms. The Association plans to increase diversity within its leadership ranks by 30 percent by September 2007 and has asked member firms to recommend potential candidates. To help member firms expand the pool of qualified diverse employees in the real estate finance industry, the Association has instituted a scholarship program called “Path to Diversity,” which awards between 20 and 30 scholarships per year to minority employees and interns from member firms. Recipients can take courses at CampusMBA, the Association’s training center for real estate finance, in order to further their professional growth and development in the mortgage industry. Although financial services firms and trade organizations we contacted have launched diversity initiatives, they cited a variety of challenges that may have limited their success. 
First, the officials said that the industry faces ongoing challenges in recruiting minority and women candidates even though firms may have established scholarship and internship programs and partnered with professional organizations. According to officials responsible for promoting workforce diversity from several firms, the industry lacks a critical mass of minority and women employees, especially at the senior levels, to serve as role models to attract other minorities to the industry. Officials from an investment bank and a commercial bank also told us that the supply (or “pipeline”) of minority and women candidates in line for senior or management-level positions was limited in some geographic areas and that recruiting a diverse talent pool takes time and effort. Officials from an investment bank said that their firm typically required a high degree of specialization in finance for key positions. An official from another investment bank noted that minority candidates with these skills were very much in demand and usually received multiple job offers. Available data on minorities enrolled in and graduated from MBA programs provide some support for the contention that there is a limited external pool that could feed the “pipeline” for some management-level positions within the financial services industry, as well as other industries. According to the Department of Labor, many top executives from all industries, including the financial services industry, have a bachelor’s degree or higher in business administration. MBA degrees are also typically required for many management development programs, according to an official from a commercial bank and an official from a foundation that provides scholarships to minority MBA students. We obtained data from the Association to Advance Collegiate Schools of Business (AACSB) on the percentage of students enrolled in MBA degree programs in accredited AACSB schools in the United States from 2000 to 2004. 
As shown in table 1, minorities accounted for 19 percent of all students enrolled in accredited MBA programs in 2000 and 23 percent in 2004. African-American and Hispanic enrollment in MBA programs was generally stable during that period, with the two groups accounting for 6 and 5 percent of enrollment, respectively, in 2004. Asian representation increased from 9 percent in 2000 to 11 percent in 2004. Other data indicate that the percentage of MBA degrees awarded to minorities may be lower than the enrollment percentages reported by AACSB. For example, Graduate Management Admission Council® (GMAC®) data indicate that minorities in its survey sample accounted for 16 percent of MBA graduates in 2004 versus 23 percent minority enrollment during the same year as reported by AACSB. Because financial services firms compete with one another, as well as with companies from other industries, to recruit minority MBA graduates, their capacity to increase diversity at the management level may be limited. Other evidence suggests that the financial services industry may not be fully leveraging its “internal” pipeline of minority and women employees for management-level positions. As shown in figure 5, there are job categories within the financial services industry that generally have more overall workforce diversity than the “officials and managers” category, particularly among minorities. For example, minorities held 22 percent of professional positions as compared with 15 percent of “officials and managers” positions in 2004. See appendix II for more information on the specific number of employees within other job categories, as well as more specific breakouts of various minority groups by sector. According to a recent EEOC report, which used 2003 EEO-1 data, the professional category represented a likely pipeline of internal candidates for management-level positions within the industry. 
Compared with white males, the EEOC study found that the chances of minorities and women (white and minority combined) advancing from the professional category into management-level positions were low. The study also found that the chances of Asians (women and men) advancing into management-level positions from the professional category were particularly low. Although EEOC said there are limitations to its analysis, the agency suggests that the findings could be used as a preliminary screening device designed to detect potential disparities in management-level opportunities for minorities and women. Following are descriptions of the job categories in EEO-1 data from EEOC: (1) “officials and managers”: occupations requiring administrative and management personnel who set broad policies, exercise overall responsibility for execution of these policies, and direct individual departments or special phases of a firm’s operations; (2) “professionals”: occupations requiring either college graduation or experience of such kind and amount as to provide a comparable background; (3) “technicians”: occupations requiring a combination of basic scientific knowledge and manual skill that can be obtained through 2 years of post high school education; (4) “sales workers”: occupations engaging wholly or primarily in direct selling; (5) “office and clerical”: includes all clerical-type work regardless of level of difficulty, where the activities are predominantly nonmanual; and (6) the category “other” includes craft workers, operatives, laborers, and service workers. Many officials from financial services firms, industry trade groups, and associations that represent minority professionals agreed that retaining minority and women employees represented one of the biggest challenges to promoting workforce diversity. 
The officials said that one reason minority and women employees may leave their positions after a short period is that the industry, as described previously, lacks a critical mass of minority women and men, particularly in senior-level positions, to serve as role models. Without a critical mass, the officials said that minority or women employees may lack the personal connections and access to informal networks that are often necessary to navigate an organization’s culture and advance their careers. For example, an official from a commercial bank we contacted said he learned from staff interviews that African-Americans believed that they were not considered for promotion as often as others partly because they were excluded from informal employee networks. While firms may have instituted programs to involve managers in diversity initiatives, some industry officials said that achieving commitment, or “buy-in,” can still pose challenges. Other officials said that achieving the commitment of middle managers is particularly important because these managers are often responsible for implementing key aspects of the diversity initiatives, as well as explaining them to their staffs. However, the officials said that middle managers may be focused on other aspects of their responsibilities, such as meeting financial performance targets, rather than on implementing the organization’s diversity initiatives. Additionally, the officials said that implementing diversity initiatives represents a considerable cultural and organizational change for many middle managers and employees at all levels. 
An official from an investment bank told us that the bank has been reaching out to middle managers who oversee minority and women employees by, for example, instituting an “inclusive manager program.” According to the official, the program helps managers examine subtle inequities and different managerial and working styles that may affect their relationships with minority and women employees. Studies and reports, as well as interviews we conducted, suggest that minority- and women-owned businesses have faced challenges obtaining capital (primarily bank credit) in conventional financial markets for several business reasons, such as the concentration of these businesses in the service sector and relative lack of a credit history. Other studies suggest that lenders may discriminate, particularly against minority-owned businesses. However, assessing lending discrimination against minority-owned businesses may be complicated by limited data availability. Available research also suggests that factors, including business characteristics, introduce challenges for both minority- and women-owned businesses in obtaining access to equity capital. However, some financial institutions, primarily commercial banks, have recently developed strategies to market their loan products to minority- and women-owned businesses or are offering technical assistance to them. Reports issued by the MBDA, SBA, and academic researchers, as well as interviews we conducted with commercial banks, minority-owned banks, and trade groups representing minority- and women-owned businesses, suggest that minority- and women-owned businesses may face challenges in obtaining commercial bank credit. 
The reports and interviews typically cite several business characteristics shared by both minority-owned firms and, in most cases, women-owned firms that may compromise their ability to obtain bank credit, as follows: First, recent MBDA reports found that many minority-owned businesses in the United States are concentrated in retail and service industries, which have relatively low average annual capital expenditures for equipment. Low capital expenditures are an attractive feature for start-up businesses, but with limited assets to pledge as collateral against loans, these businesses often have difficulty obtaining financing. According to the U.S. Census Bureau’s 2002 Survey of Business Owners, approximately 61 percent of minority-owned businesses and approximately 55 percent of women-owned firms operate in the service sectors as compared to about 52 percent of all U.S. firms. Second, the Census Bureau’s 2002 Survey of Business Owners indicated that many minority- and women-owned businesses were start-ups or relatively new and, therefore, might not have a history of sound financial performance to present when applying for credit. Some officials from a private research organization and a trade group official we contacted said that banks are reluctant to lend to start-up businesses because of the costs involved in assessing the prospects for such businesses and in monitoring their performance over time. Third, the relatively small size and lack of technical experience of some minority-owned businesses may affect their ability to obtain bank credit. For example, an MBDA report stated that minority businesses often need extensive mentoring and technical assistance, such as help developing business plans, in addition to financing. Several other studies suggest that discrimination may also be a reason that minority-owned businesses face challenges obtaining commercial loans. 
For example, a 2005 SBA report on the small business economy summarized previous studies by researchers reporting on lending discrimination. These previous studies found that minority-owned businesses had a higher probability of having their loans denied and would likely pay higher interest rates than white-owned businesses, even after controlling for differences in creditworthiness and other factors. For example, a study found that given comparable loan applications—by African-American and Hispanic-owned firms and white-owned firms—the applications by the African-American and Hispanic-owned firms were more likely to be denied. Another study found that minorities had higher denial rates even after controlling for personal net worth and homeownership. The SBA report concludes that lending discrimination is likely to discourage would-be minority entrepreneurs and reduce the longevity of minority-owned businesses. Another 2005 report issued by SBA also found that minority-owned businesses face some restrictions in access to credit. This study investigated possible restricted access to credit for minority- and women-owned businesses by focusing on two types of credit—“relationship loans” (lines of credit) and “transaction loans” (commercial mortgages, equipment loans, and other loans) from commercial banks and nonbanks, such as finance companies. The researchers found that minority business owners were more likely to have transaction loans from nonbanks and less likely to have bank loans of any kind. The researchers also found that African-American and Hispanic business owners have a greater probability of having either type of loan denied than white male owners. The researchers did not find evidence suggesting that women or Asian business owners faced loan denial probabilities different from those of white male-owned firms. 
Although studies have found potential lender discrimination against minority-owned businesses, assessing such discrimination may be complicated by limited data availability. The Federal Reserve’s Regulation B, which implements the Equal Credit Opportunity Act, prohibits financial institutions from requiring information on race and gender from applicants for nonmortgage credit products. Although the regulation was implemented to prevent the information from being used to discriminate against underserved groups, some federal financial regulators have stated that removing the prohibition would allow them to better monitor and enforce laws prohibiting discrimination in lending. We note that under the Home Mortgage Disclosure Act (HMDA), lenders are required to collect and report data on racial and gender characteristics of applicants for mortgage loans. Researchers have used HMDA data to assess potential mortgage lending discrimination by financial institutions. In contrast, the studies we reviewed on lending discrimination against minority-owned small businesses tend to rely on surveys of small businesses by the Federal Reserve or the Census Bureau rather than on lending data obtained directly from financial institutions. According to available research, many minority- and women-owned businesses face challenges in raising equity capital, such as funding from venture capital firms. For example, one study estimated that only $2 billion of the $95 billion available in the private equity market in 1999 was managed by companies that focused on supplying capital to entrepreneurs from traditionally underserved markets, such as minority-owned businesses. Moreover, according to a study by a private research organization, in 2003 only 4 percent of women-owned businesses with $1 million or more in revenue had been funded through private equity capital as compared with 11 percent of male-owned businesses with revenues of $1 million or more. 
According to studies and reports by private research organizations, some of the same types of business characteristics that may affect the ability of many minority- and women-owned businesses to obtain bank credit also limit their capacity to raise equity capital. For example, industry reports and industry representatives that we contacted state that venture capitalists place a high priority on the management and technical skills of the companies in which they invest, whereas some minority-owned businesses may lack a proven track record of such expertise. Although venture capital firms may not have traditionally invested in minority-owned businesses, a recent study suggests that firms that do focus on such entities can earn rates of return comparable to those earned on mainstream private equity investments. This study, funded by a private foundation, found that venture capital funds that specialize in investing in minority-owned businesses were relatively profitable compared with a private equity performance index. According to the study, the venture capital funds that specialized in minority-owned businesses invested in a more diverse portfolio of businesses than the typical venture capital fund, which typically focuses on high-tech companies. The study found that investing in broad portfolios helped mitigate the losses associated with the downturn in the high-tech sector for firms that focused on minority-owned businesses. While minority- and women-owned businesses may have traditionally faced challenges in obtaining capital, as noted earlier, Census data indicate that such businesses are forming rapidly. Officials from some financial institutions we contacted, primarily large commercial banks, told us that they are reaching out to minority- and women-owned businesses. 
Some commercial banks are marketing their financial products to minority- and women-owned businesses by, for example, printing financial services brochures in various languages and assigning senior executives with diverse backgrounds to serve as the spokespersons for the institution’s efforts to reach out to targeted groups (e.g., a bank may designate an Asian executive as the point person for Asian communities). However, officials at a bank and a trade organization told us that the loan products marketed to minority- and women-owned businesses did not differ from those marketed to other businesses and that underwriting standards had not changed. Bank officials also said that their companies had established partnerships with trade and community organizations for minorities and women to reach out to their businesses. Partnering allows the banks to locate minority- and women-owned businesses and gather information about specific groups of business owners. Bank officials said that such partnerships had been an effective means of increasing their business with these target groups. Finally, officials from some banks said that they educate potential business clients by providing technical assistance through financial workshops and seminars on various issues such as developing business plans and obtaining commercial bank loans. Other bank officials said that their staffs work with individual minority- or women-owned businesses to provide technical assistance. Officials from banks with strategies to market to minority- and women-owned businesses said that they faced some challenges in implementing such programs. Many of the bank officials told us that it was time-consuming to train their staff to reach out to minority- and women-owned businesses and provide technical assistance to these potential business customers. In addition, an official from a bank said that Regulation B limited the bank’s ability to measure the success of its outreach efforts. 
The official said that because of Regulation B, the bank could only estimate the number of loans it made to minority- and women-owned businesses and, therefore, could only estimate the success of its outreach efforts. We requested comments on a draft of this report from the Chair, U.S. Equal Employment Opportunity Commission (EEOC). We received technical comments from EEOC and incorporated their comments into this report as appropriate. We also requested comments on selected excerpts of a draft of this report from 12 industry trade associations, federal agencies, and organizations that examine access to capital issues. We received technical comments from 4 of the 12 associations, agencies, and organizations and incorporated their comments into this report as appropriate. The remaining eight either informed us that they had “no comments” or did not respond to our request. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Senate Committee on Banking, Housing, and Urban Affairs. We also will send copies to the Chair of EEOC, the Administrator of SBA, and the Secretary of the Department of Commerce, among others, and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
The objectives of our report were to discuss (1) what the available data show regarding diversity at the management level in the financial services industry, from 1993 through 2004; (2) the types of initiatives that the financial services industry and related organizations have taken to promote workforce diversity and the challenges involved; and (3) the ability of minority- and women-owned businesses to obtain access to capital in financial markets and initiatives financial institutions have recently taken to make capital available to these businesses. To address objective one, we requested Employer Information Report (EEO-1) data from the Equal Employment Opportunity Commission (EEOC) for the financial services industry. The EEO-1 data, which are reported annually, generally by firms with 100 or more employees, provide information on race/ethnicity and gender for various occupations within various industries, including financial services. We used the racial/ethnic groups specified by EEOC for our analysis: whites, not of Hispanic origin (whites); Asians or Pacific Islanders (Asians); Blacks, not of Hispanic origin (African-Americans); Hispanics or Latinos (Hispanics); and American Indians or Alaskan Natives (American Indians). The EEO-1 occupations are officials and managers, professionals, technicians, sales workers, clerical workers, and others. The other category includes laborers, craft workers, operatives, and service workers. We defined the financial services industry to include the following five sectors: depository credit institutions (including commercial banks), holdings and trusts (including investment companies), non-depository credit institutions (such as mortgage bankers), securities firms, and insurance (carriers and agents). We also requested and analyzed EEO-1 data for the accounting industry. 
We chose to use the EEO-1 database because it was designed to provide information on representation by a variety of groups within a range of occupations and industries, covered many employers, and had been collected in a standardized fashion for many years. Although the EEO-1 data generally do not capture information from small businesses with fewer than 100 employees, we believe that, given the annual mandatory reporting, the data allow us to characterize financial services firms with 100 or more employees. We also corroborated the EEO-1 data with other available studies, particularly a 2005 study by the Securities Industry Association on diversity within the securities sector. We did consider other sources of data besides EEO-1 but chose not to use them for a variety of reasons, including their being more limited or less current. We requested and analyzed the EEO-1 data, focusing on the “officials and managers” category, for the years 1993, 1998, 2000, and 2004 for financial services firms having 100 or more employees. We compared the data from the selected years to determine how the composition of management-level staff had changed since 1993. We also analyzed the data by firm size (the number of employees in the firm). The four firm size categories we used were 100 or more employees, 100-249 employees, 250-999 employees, and 1,000 or more employees. We requested EEO-1 data for the accounting industry for 2004 only and therefore did not perform a trend analysis for that industry. The scope of our work did not include developing appropriate benchmarks to assess the extent of workforce diversity within the financial services industry. EEOC collects EEO-1 data from companies in a manner that allowed us to specify our data request and analysis by financial sector (e.g., commercial banking or securities). 
EEOC assigns each firm a code based on its primary activity (referred to as the North American Industry Classification System (NAICS) or Standard Industrial Classification (SIC) code). For example, a commercial bank will have a specific code denoting commercial banking, whereas a securities firm would have its own securities code. In addition, EEOC assigns codes to companies and their subsidiaries based on their primary line of business. For example, a commercial bank with an insurance subsidiary would have a separate code for that subsidiary. By requesting the EEO-1 data by the relevant codes, we were able to separate the different financial services businesses within a firm and then aggregate the data by sector. Although the NAICS replaced the SIC in 1997, EEOC staff are to assign both codes to each firm that existed prior to 2002 to ensure consistency. We conducted a limited analysis to assess the reliability of the EEO-1 data. To do so, we interviewed EEOC officials regarding how the data are collected and verified, as well as to identify potential data limitations. EEOC has conducted a series of data reliability analyses for EEO-1 data to verify the consistency of the data over time. For example, EEOC reviewed the 2003 EEO-1 data for its report on diversity in the financial services industry. As part of this review, EEOC deleted 81 of the 13,000 establishments because the data for the deleted establishments were not consistent from year to year. The EEOC staff do not verify the EEO-1 data, which are self-reported by firms, but they do review the trends of the data submitted. For example, EEOC staff look for major fluctuations in job classifications within an industry. On the basis of this analysis, we concluded that the EEO-1 data are sufficiently reliable for our purposes. To address objective two, we interviewed officials from a range of financial services firms, including commercial banks and securities firms. 
We also interviewed representatives from a large accounting firm to discuss workforce diversity in the accounting industry. We chose these firms for a variety of reasons, including whether they had received public recognition of their diversity programs or on the basis of recommendations from industry officials. We also interviewed representatives from industry trade organizations such as the American Bankers Association, the Securities Industry Association, the Independent Insurance Agents and Brokers of America, the American Institute of Certified Public Accountants, and Catalyst, which is a private research firm. We reviewed the trade organizations’ available studies and reports to document the state of diversity within the different sectors of the financial services industry. In addition, we reviewed publicly available data on firms’ programs by searching their Web sites. We also interviewed representatives of federal agencies such as the Bureau of Labor Statistics of the Department of Labor, the Minority Business Development Agency of the Department of Commerce, the Small Business Administration, and federal bank regulators. Additionally, we collected and analyzed demographic data on enrollment in accredited Master of Business Administration (MBA) programs from the Association to Advance Collegiate Schools of Business and MBA graduation data from the Graduate Management Admission Council®. To address objective three, we reviewed 20 available studies and reports from federal agencies, such as the Small Business Administration and the Minority Business Development Agency, and academic studies on the ability of minority- and women-owned businesses to access credit. We also interviewed officials from banks, investment firms, and private equity/venture capital firms to discuss their initiatives to provide capital to minority- and women-owned businesses. 
Moreover, we interviewed officials from organizations that represent minority- and women-owned businesses such as the U.S. Hispanic Chamber of Commerce, the Pan Asian American Chamber of Commerce, the National Black Chamber of Commerce, and the National Association of Women Business Owners. In addition, we interviewed officials from organizations that examine access to capital issues, such as the Milken Institute and the Kauffman Foundation. We conducted our work from July 2005 to May 2006 in Washington, D.C., and New York City and in accordance with generally accepted government auditing standards. This appendix provides Employer Information Report (EEO-1) data on the number of employees within the financial services industry by position (see fig. 6) and more specific breakouts of the various racial/ethnic groups by position (see fig. 7). This appendix discusses workforce diversity of management-level positions in the accounting industry for 2004 as depicted by Employer Information Report (EEO-1) data. Additionally, it describes the findings of a report by the American Institute of Certified Public Accountants (AICPA) that assessed diversity within the accounting industry in a broad range of positions. Finally, the appendix summarizes efforts by AICPA and a large accounting firm to increase diversity in key positions. According to the 2004 EEO-1 data, minorities held 13.5 percent (5.9 percent for minority women and 7.7 percent for minority men) of all “officials and managers” positions in the accounting industry, white women held 32.4 percent, and white men held 54.1 percent (see fig. 8). In contrast to the financial services industry, where diversity among firms generally did not vary by firm size, EEO-1 data show that larger accounting firms are, in general, more diverse than smaller firms. For example, minorities accounted for 17.8 percent of all officials and managers in accounting firms with 1,000 or more employees. 
For firms with 100 to 249 employees, minority representation for officials and managers accounted for 10.1 percent. Within the minority category in the accounting industry, EEO-1 2004 data show that Asians held 7.3 percent of all management-level positions, which is more than the representation of African-Americans (3.0 percent) and Hispanics (3.0 percent) combined (see fig. 9). AICPA’s 2005 demographic study showed that, in 2004, minorities represented 10 percent of all professional staff, 8 percent of all certified public accountants (CPA), and 5 percent of all partners/owners employed by CPA firms. Correspondingly, the representation of whites among professional staff, CPAs, and partners/owners at accounting firms was 89 percent or above at each level (see table 2). In addition, consistent with the 2004 EEO-1 data for the accounting industry, the AICPA study found that the largest CPA firms were, in general, the most ethnically and racially diverse (see table 3). According to officials from AICPA and a large accounting firm we spoke with, one reason for the lack of diversity in key positions in the industry is that relatively few racial/ethnic minorities take the CPA exam and thus relatively few minorities are CPAs. According to the 2004 congressional testimony of an accounting professor, passing the CPA exam is critical for achieving senior management-level positions in the accounting industry. According to officials we spoke with from AICPA and an accounting firm, similar to the financial services industry, the accounting industry had also initiated programs to promote the diversity of its workforce. An official from the large accounting firm we spoke with told us that his firm’s top management is committed to workforce diversity and has implemented a minority leadership development program, which ensures that minorities and women become eligible for and are recommended for progressively more senior positions. 
As part of the commitment to workforce diversity, the firm also has a mentoring program, which pairs current partners with senior management-level minority and women staff to help them achieve partnership status. In addition, the firm requires middle- and high-level managers to undergo diversity training to encourage an open dialogue around racial/ethnic and gender issues. An AICPA official said the organization formed a minority initiatives committee to promote workforce diversity with a number of initiatives to increase the number of minority accounting degree holders, such as scholarships for minority accounting students and accounting faculty development programs. AICPA also formed partnerships with several national minority accounting organizations such as the National Association of Black Accountants and the Association of Latino Professionals in Finance and Accounting to develop new programs to foster diversity within the workplace and the community. In addition to the individual named above, Wesley M. Phillips, Assistant Director; Emily Chalmers; William Chatlos; Kimberly Cutright; Simin Ho; Marc Molino; Robert Pollard; LaSonya Roberts; and Bethany Widick made key contributions to this report.
During a hearing in 2004 on the financial services industry, congressional members and witnesses expressed concern about the industry's lack of workforce diversity, particularly in key management-level positions. Witnesses stated that financial services firms (e.g., banks and securities firms) had not made sufficient progress in recruiting and promoting minority and women candidates for management-level positions. Concerns were also raised about the ability of minority-owned businesses to raise capital (i.e., debt or equity capital). GAO was asked to provide an overview of the status of diversity in the financial services industry. This report discusses (1) what available data show regarding diversity at the management level in the financial services industry from 1993 through 2004, (2) the types of initiatives that financial firms and related organizations have taken to promote workforce diversity and the challenges involved, and (3) the ability of minority- and women-owned businesses to obtain access to capital in financial markets and the initiatives financial institutions have taken to make capital available to these businesses. From 1993 through 2004, overall diversity at the management level in the financial services industry did not change substantially, but increases in representation varied by racial/ethnic minority group. During that period, Equal Employment Opportunity Commission (EEOC) data show that management-level representation by minority men and women increased from 11.1 percent to 15.5 percent. Specifically, African-Americans increased their representation from 5.6 percent to 6.6 percent, Asians from 2.5 percent to 4.5 percent, Hispanics from 2.8 percent to 4.0 percent, and American Indians from 0.2 percent to 0.3 percent. The EEOC data also show that representation by white women remained constant at slightly more than one-third, whereas representation by white men declined from 52.2 percent to 47.2 percent. 
Financial services firms and trade groups GAO contacted stated that they have initiated programs to increase workforce diversity, including in management-level positions, but these initiatives face challenges. The programs include developing scholarships and internships, establishing programs to foster employee retention and development, and linking managers' compensation with their performance in promoting a diverse workforce. However, firm officials said that they still face challenges in recruiting and retaining minority candidates. Some officials also said that gaining employees' "buy-in" to diversity programs was a challenge, particularly among middle managers, who were often responsible for implementing key aspects of such programs. Research reports suggest that minority- and women-owned businesses have generally faced difficulties in obtaining access to capital for several reasons; for example, these businesses may be concentrated in service industries and may lack assets to pledge as collateral. Other studies suggest that lenders may discriminate in providing credit, but assessing lending discrimination may be complicated by limited data availability. However, some financial institutions, primarily commercial banks, said that they have developed strategies to serve minority- and women-owned businesses. These strategies include marketing existing financial products specifically to minority and women business owners.
To be eligible for the 7(a) loan program, a business must be an operating for-profit small firm (according to SBA’s size standards) located in the United States. To determine whether a business qualifies as small for the purposes of the 7(a) program, SBA uses size standards that it has established for each industry. SBA relies on the lenders that process and service 7(a) loans to ensure that borrowers meet the program’s eligibility requirements. In addition, lenders must certify that small businesses meet the “credit elsewhere” requirement. SBA does not extend credit to businesses if the financial strength of the individual owners or the firm itself is sufficient to provide or obtain all or part of the financing the firm needs or if the business can access conventional credit. To certify borrowers as having met the credit elsewhere requirement, lenders must first determine that the firm’s owners are unable to provide the desired funds from their personal resources. Second, lenders must determine that the business cannot secure the desired credit for similar purposes and the same period of time on reasonable terms and conditions from nonfederal sources (lending institutions) without SBA assistance, taking into account the prevailing rates and terms in the community or locale where the firm conducts business. According to SBA’s fiscal year 2003-2008 Strategic Plan, the agency’s mission is to maintain and strengthen the nation’s economy by enabling the establishment and viability of small businesses and by assisting in the economic recovery of communities after disasters. 
SBA describes the 7(a) program as contributing to an agencywide goal to “increase small business success by bridging competitive opportunity gaps facing entrepreneurs.” As reported annually in SBA’s Performance and Accountability Reports (PAR), the 7(a) program contributes to this strategic goal by fulfilling each of the following three long-term, agencywide objectives: increasing the positive impact of SBA assistance on the number and success of small business start-ups, maximizing the sustainability and growth of existing small businesses that receive SBA assistance, and significantly increasing successful small business ownership within segments of society that face special competitive opportunity gaps. Groups facing these special competitive opportunity gaps include those that SBA considers to own and control little productive capital and to have limited opportunities for small business ownership (such as African Americans, American Indians, Alaska Natives, Hispanics, Asians, and women) and those that are in certain rural or low-income areas. For each of its three long-term objectives, SBA collects and reports on the number of loans approved, the number of loans funded (i.e., money that was disbursed), and the number of firms assisted. Loan guarantee programs can result in subsidy costs to the federal government, and the Federal Credit Reform Act of 1990 (FCRA) requires, among other things, that agencies estimate the cost of these programs— that is, the cost of the loan guarantee to the federal government. 
In recognizing the difficulty of estimating credit subsidy costs and acknowledging that the eventual cost of the program may deviate from initial estimates, FCRA requires agencies to make annual revisions (reestimates) of credit subsidy costs for each cohort of loans made during a given fiscal year using new information about loan performance, revised expectations for future economic conditions and loan performance, and improvements in cash flow projection methods. These reestimates represent additional costs or savings to the government and are recorded in the budget. FCRA provides that reestimates that increase subsidy costs (upward reestimates), when they occur, be funded separately with permanent indefinite budget authority. In contrast, reestimates that reduce subsidy costs (downward reestimates) are credited to the Treasury and are unavailable to the agency. In addition, FCRA does not count administrative expenses against the appropriation for credit subsidy costs. Instead, administrative expenses are subject to separate appropriations and are recorded each year as they are paid, rather than as loans are originated. The legislative basis for the 7(a) program recognizes that the conventional lending market is the principal source of financing for small businesses and that the loan assistance that SBA provides is intended to supplement rather than compete with that market. The design of the 7(a) program has SBA collaborating with the conventional market in identifying and supplying credit to small businesses in need of assistance. Specifically, we highlight three design features of the 7(a) program that help it address concerns identified in its legislative history. First, the loan guarantee, which plays the same role as collateral, limits the lender’s risk in extending credit to a small firm. 
Second, the “credit elsewhere” requirement is intended to provide some assurance that guaranteed loans are offered only to firms that are unable to access credit on reasonable terms and conditions in the conventional lending market. Third, an active secondary market for the guaranteed portion of a 7(a) loan allows lenders to sell the guaranteed portion of the loan to investors, providing additional liquidity that lenders can use for additional loans. Furthermore, numerous amendments to the Small Business Act and to the 7(a) program have laid the groundwork for broadening small business ownership among certain groups, including veterans, handicapped individuals, and women, as well as among persons from historically disadvantaged groups, such as African Americans, Hispanic Americans, Native Americans, and Asian Pacific Americans. The 7(a) program also includes provisions for extending financial assistance to small businesses that are located in urban or rural areas with high proportions of unemployed or low-income individuals or that are owned by low-income individuals. The program’s legislative history highlights its role in, among other things, helping small businesses get started, allowing existing firms to expand, and enabling small businesses to develop foreign markets for their products and services. All nine performance measures we reviewed provided information that related to the 7(a) loan program’s core activity, which is to provide loan guarantees to small businesses. In particular, the indicators all provided the number of loans approved, loans funded, and firms assisted across the subgroups of small businesses the 7(a) program was intended to assist. We have stated in earlier work that a clear relationship should exist between an agency’s long-term strategic goals and its program’s performance measures. Outcome-based goals or measures showing a program’s impact on those it serves should be included in an agency’s performance plan whenever possible. 
However, all of the 7(a) program’s performance measures are primarily output measures. SBA does not collect any outcome-based information on how well firms are doing after receiving a 7(a) loan. Further, none of the measures link directly to SBA’s long-term objectives. As a result, the performance measures do not fully support SBA’s strategic goal of increasing the success of small businesses by “bridging competitive opportunity gaps facing entrepreneurs.” SBA officials have recognized the importance of developing performance measures that better assess the 7(a) program’s impact on the small firms that receive the guaranteed loans. SBA is still awaiting a final report, originally expected sometime during the summer of 2007, from the Urban Institute, which has been contracted to undertake several evaluative studies of various SBA programs, including 7(a), that provide financial assistance to small businesses. SBA officials explained that, for several reasons, no formal decision had yet been made about how the agency might alter or enhance the current set of performance measures to provide more outcome-based information on the 7(a) program. The reasons given included the agency’s reevaluation of its current strategic plan in response to requirements in the Government Performance and Results Act of 1993 that agencies reassess their strategic plans every 3 years, a relatively new administrator who may make changes to the agency’s performance measures and goals, and the cost and legal constraints associated with the Urban Institute study. However, SBA already collects information showing how firms are faring after they obtain a guaranteed loan. In particular, SBA regularly collects information on how well participating firms are meeting their loan obligations. 
This information generally includes, among other things, the number of firms that have defaulted on or prepaid their loans—data that could serve as reasonable proxies for determining a firm’s financial status. However, the agency primarily uses the data to estimate some of the costs associated with the program and for internal reporting purposes, such as monitoring participating lenders and analyzing its current loan portfolio. Using this information to expand its performance measures could provide SBA and others with helpful information about the financial status of firms that have been assisted by the 7(a) program. To better ensure that the 7(a) program is meeting its mission responsibility of helping small firms succeed through guaranteed loans, we recommended in our report that SBA complete and expand its current work on evaluating the 7(a) program’s performance measures. As part of this effort, we indicated that, at a minimum, SBA should further utilize the loan performance information it already collects, including but not limited to defaults, prepayments, and the number of loans in good standing, to better report how small businesses fare after they participate in the 7(a) program. In its written response, SBA concurred with our recommendation. We found limited evidence from economic studies that credit constraints, such as credit rationing, could have some effect on small businesses in the conventional lending market. Credit rationing, or denying loans to creditworthy individuals and firms, generally stems from lenders’ uncertainty or lack of information regarding a borrower’s ability to repay debt. Economic reasoning suggests that there exists an interest rate—that is, the price of a loan—beyond which banks will not lend, even though there may be creditworthy borrowers willing to accept a higher interest rate. 
Because the market interest rate will not climb high enough to convince lenders to grant credit to these borrowers, these applicants will be unable to access credit and will be left out of the lending market. Of the studies we identified that empirically looked for evidence of this constraint within the conventional U.S. lending market, almost all provided some evidence consistent with credit rationing. For example, one study found evidence of credit rationing across all sizes of firms. However, another study suggested that the effect of credit rationing on small firms was likely small, and a third suggested that the impact on the national economy was not likely to be significant. Because the underlying reason for having been denied credit can be difficult to determine, true credit rationing is difficult to measure. In some studies we reviewed, we found that researchers used different definitions of credit rationing, and we determined that a broader definition was more likely to yield evidence of credit rationing than a narrower definition. For example, one study defined a firm as facing credit rationing if it had been denied a loan or discouraged from applying for credit. However, another study pointed out that firms could be denied credit for reasons other than credit rationing—for instance, for not being creditworthy. Other studies of small business lending that we reviewed found evidence of credit rationing by testing whether the circumstances of denial were consistent with a “credit rationing” explanation, such as a lack of information. Two studies concluded that having a preexisting relationship with the lender had a positive effect on the borrower’s chance of obtaining a loan. The empirical evidence from another study suggested that lenders used information accumulated over the duration of a financial relationship with a borrower to define loan terms. 
This study’s results suggested that firms with longer relationships received more favorable terms—for instance, they were less likely to have to provide collateral. Because having a relationship with a borrower would give the lender more information, the positive effect of a preexisting relationship is consistent with the theory behind credit rationing. However, the studies we reviewed regarding credit rationing used data from the early 1970s through the early 1990s and thus did not account for several recent trends that may have affected, either positively or negatively, the extent of credit rationing within the small business lending market. These trends include, for example, the increasing use of credit scores, changes to bankruptcy laws, and consolidation in the banking industry. Discrimination on the basis of race or gender may also cause lenders to deny loans to potentially creditworthy firms. Discrimination would also constitute a market imperfection, because lenders would be denying credit for reasons other than the interest rate or other risks associated with the borrower. A 2003 survey of small businesses conducted by the Federal Reserve examined differences in credit use among racial groups and between genders. The survey found that 48 percent of small businesses owned by African Americans and women and 52 percent of those owned by Asians had some form of credit, while 61 percent of white- and Hispanic-owned businesses had some form of credit. Studies have attempted to determine whether such disparities are due to discrimination, but the evidence from the studies we reviewed was inconclusive. Certain segments of the small business lending market, including minority-owned businesses and start-up firms, received a higher share of 7(a) loans than of conventional loans from 2001 through 2004. More than a quarter of 7(a) loans went to small businesses with minority ownership, compared with an estimated 9 percent of conventional loans (fig. 1). 
However, in absolute numbers many more conventional loans went to the segments of the small business lending market we could measure, including minority-owned small businesses, than loans with 7(a) guarantees. Compared with conventional loans, a higher percentage of 7(a) loans went to small new (that is, start-up) firms from 2001 through 2004 (fig. 2). Specifically, 25 percent of 7(a) loans went to small business start-ups, in contrast to an estimated 5 percent of conventional loans that went to newer small businesses over the same period. Only limited differences exist between the shares of 7(a) and conventional loans that went to other types of small businesses from 2001 through 2004. For example, 22 percent of all 7(a) loans went to small women-owned firms, compared with an estimated 16 percent of conventional loans that went to these firms. The percentages of loans going to firms owned equally by men and women were also similar—17 percent of 7(a) loans and an estimated 14 percent of conventional loans (fig. 3). However, these percentages are small compared with those for small firms headed by men, which captured most of the small business lending market from 2001 to 2004. These small businesses received 61 percent of 7(a) loans and an estimated 70 percent of conventional loans. Similarly, relatively equal shares of 7(a) and conventional loans reached small businesses in economically distressed neighborhoods (i.e., zip code areas) from 2001 through 2004—14 percent of 7(a) loans and an estimated 10 percent of conventional loans. SBA does not specifically report whether a firm uses its 7(a) loan in an economically distressed neighborhood but does track loans that go to firms located in areas it considers “underserved” by the conventional lending market. SBA’s own analysis found that 49 percent of 7(a) loans approved and disbursed in fiscal year 2006 went to these geographic areas. 
A higher proportion of 7(a) loans (57 percent) went to smaller firms (that is, firms with up to five employees), compared with an estimated 42 percent of conventional loans. As the number of employees increased, differences in the proportions of 7(a) and conventional loans to firms with similar numbers of employees decreased. Also, similar proportions of 7(a) and conventional loans went to small businesses with different types of organizational structures and in different geographic locations. Our analysis of information on the credit scores of small businesses that accessed credit without SBA assistance showed only limited differences between these credit scores and those of small firms that received 7(a) loans. As reported in a database developed by two private business research and information providers, The Dun & Bradstreet Corporation and Fair Isaac Corporation (D&B/FIC), the credit scores we compared are typically used to predict the likelihood that a borrower, in this case a small business, will repay a loan. In our comparison of firms that received 7(a) loans and those that received conventional credit, we found that for any particular credit score band (e.g., 160 to <170) the differences were no greater than 5 percentage points. The average difference for these credit score bands was 1.7 percentage points (fig. 4). More credit scores for 7(a) borrowers were concentrated in the lowest (i.e., riskier) bands compared with general borrowers, but most firms in both the 7(a) and the D&B/FIC portfolios had credit scores in the same range (from 170 to <200). Finally, the percentage of firms with credit scores in excess of 210 was less than 1 percent for both groups. The results of our analysis of credit scores should be interpreted with some caution. First, the time periods for the two sets of credit scores are different. Initial credit scores for businesses receiving 7(a) loans in our analysis are from 2003 to 2006. 
The scores developed by D&B/FIC for small businesses receiving conventional credit are based on data from 1996 through 2000 that include information on outstanding loans that may have originated during or many years before that period. Second, D&B/FIC’s scores for small businesses receiving conventional loans may not be representative of the population of small businesses. Although D&B/FIC combined hundreds of thousands of financial records from many lenders and various loan products with consumer credit data for their credit score development sample, they explained that the sample was not statistically representative of all small businesses. Another score developed by D&B, called the Financial Stress Score (FSS), gauges the likelihood that a firm will experience financial stress—for example, that it will go out of business. SBA officials said that, based on analyses of these scores, the repayment risk associated with 7(a) loans was higher than the risk posed by small firms able to access credit in the conventional lending market. According to an analysis D&B performed based on these scores, 32 percent of 7(a) firms showed a moderate to high risk of ceasing operations with unpaid obligations in 2006, while only 17 percent of general small businesses had a similar risk profile. As already mentioned, SBA disagreed with the results of our credit score comparison. In its written comments on our prior report, SBA primarily reiterated the cautions included in our report and stated that the riskiness of a portfolio was determined by the distribution in the riskier credit score categories. SBA said that it had not worked out the numbers but had concluded that the impact on loan defaults of the higher share of 7(a) loans in these categories would not be insignificant. Although SBA disagreed with our results, we believe that our analysis of credit scores provides a reasonable basis for comparison. 
Specifically, the data we used were derived from a very large sample of financial transactions and consumer credit data and reflected the broadest and most recent information readily available to us on small business credit scores in the conventional lending market. As SBA noted in its comments, we disclosed the data limitations and the cautions necessary in interpreting the credit score comparison. Taking into consideration the limitations associated with our analysis, future comparisons of comparable credit score data for small business borrowers may provide SBA with a more conclusive picture of the relative riskiness of borrowers with 7(a) and conventional loans, which would also be consistent with the intent of our recommendation that SBA develop more outcome-based performance measures. We also compared some of the characteristics of 7(a) and conventional loans, including the size of the loans. In the smallest loan categories (less than $50,000), a higher percentage of total conventional loans went to small businesses—53 percent, compared with 39 percent of 7(a) loans. Conversely, a greater percentage of 7(a) loans than conventional loans were for large dollar amounts. For example, 61 percent of 7(a) loans were for amounts ranging from more than $50,000 to $2 million (the maximum 7(a) loan amount), compared with an estimated 44 percent of conventional loans (fig. 5). Further, almost all 7(a) loans had variable interest rates and maturities that tended to exceed those for conventional loans. Nearly 90 percent of 7(a) loans had variable rates, compared with an estimated 43 percent of conventional loans, and almost 80 percent of 7(a) loans had maturities of more than 5 years, compared with an estimated 17 percent of conventional loans (fig. 6). For loans under $1 million, interest rates were generally higher for 7(a) loans than for conventional loans. 
From 2001 through 2004, quarterly interest rates for the 7(a) program were, on average, an estimated 1.8 percentage points higher than interest rates for conventional loans (fig. 7). Interest rates for small business loans offered in the conventional market tracked the prime rate closely and were, on average, an estimated 0.4 percentage points higher. Because the maximum interest rate allowed by the 7(a) program was the prime rate plus 2.25 percent or more, over the period the quarterly interest rate for 7(a) loans, on average, exceeded the prime rate. The current reestimated credit subsidy costs of 7(a) loans made during fiscal years 1992 through 2004 generally are lower than the original estimates, which are made at least a year before any loans are made for a given fiscal year. Loan guarantees can result in subsidy costs to the federal government, and the Federal Credit Reform Act of 1990 (FCRA) requires, among other things, that agencies estimate the cost of loan guarantees to the federal government and revise (reestimate) those costs annually as new information becomes available. The credit subsidy cost is often expressed as a percentage of loan amounts—that is, a credit subsidy rate of 1 percent indicates a subsidy cost of $1 for each $100 of loans. As we have seen, the original credit subsidy cost that SBA estimated for fiscal years 2005 and 2006 was zero, making the 7(a) program a “zero credit subsidy” program—that is, the program no longer required annual appropriations of budget authority. For loans made in fiscal years 2005 and 2006, SBA adjusted the ongoing servicing fee that it charges participating lenders so that the initial subsidy estimate would be zero based on expected loan performance at that time. 
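The subsidy-rate arithmetic above can be sketched briefly. In this hypothetical illustration, the loan volume and prime rate are assumptions chosen for the example, not SBA or market figures:

```python
# Hypothetical sketch of the credit subsidy rate arithmetic described in
# the text: a subsidy rate of 1 percent means a cost of $1 for each $100
# of loans guaranteed. All figures here are illustrative, not SBA data.

def subsidy_cost(loan_volume_dollars: float, subsidy_rate: float) -> float:
    """Dollar cost to the government implied by a credit subsidy rate."""
    return loan_volume_dollars * subsidy_rate

# A 1 percent rate on $100 of loans implies a $1 cost.
print(subsidy_cost(100, 0.01))            # 1.0

# A "zero credit subsidy" program: with the rate set to zero, no budget
# authority is needed regardless of the volume of loans guaranteed.
print(subsidy_cost(14_000_000_000, 0.0))  # 0.0

# The text also notes the 7(a) cap of prime plus 2.25 percentage points
# or more; with a hypothetical prime rate, the cap would be:
prime_rate = 0.0425                        # assumed, for illustration only
rate_cap = prime_rate + 0.0225
print(f"{rate_cap:.2%}")                   # 6.50%
```

As the text notes, the rate itself is only an estimate; the reestimates discussed below adjust the implied dollar cost up or down as actual loan performance becomes known.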
Although the federal budget recognizes costs as loans are made and adjusts them throughout the lives of the loans, the ultimate cost to taxpayers is certain only when none of the loans in a cohort remain outstanding and the agency makes a final, closing reestimate. In addition to the subsidy costs, SBA incurs administrative expenses for operating the loan guarantee program, though these costs are appropriated separately from those for the credit subsidy. In its fiscal year 2007 budget request, SBA requested nearly $80 million to cover administrative costs associated with the 7(a) program. Any forecasts of the expected costs of a loan guarantee program such as 7(a) are subject to change, since the forecasts are unlikely to include all the changes in the factors that can influence the estimates. In part, the estimates are based on predictions about borrowers’ behavior—how many borrowers will pay early or late or default on their loans and at what point in time. According to SBA officials, loan defaults are the factor that exerts the most influence on the 7(a) credit subsidy cost estimates and are themselves influenced by various economic factors, such as the prevailing interest rates. Since the 7(a) program primarily provides variable rate loans, changes in the prevailing interest rates would result in higher or lower loan payments, affecting borrowers’ ability to pay and subsequently influencing default and prepayment rates. For example, if the prevailing interest rates fall, more firms could prepay their loans to take advantage of lower interest rates, resulting in fewer fees for SBA. Loan defaults could also be affected by changes in the national or a regional economy. Generally, as economic conditions worsen—for example, as unemployment rises—loan defaults increase. To the extent that SBA cannot anticipate these changes in the initial estimates, it would include them in the reestimates. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For additional information about this testimony, please contact William B. Shear at (202) 512-8678 or Shearw@gao.gov. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Benjamin Bolitzer, Emily Chalmers, Tania Calhoun, Daniel Garcia-Diaz, Lisa Mirel, and Mijo Vodopic. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration's (SBA) 7(a) program, initially established in 1953, provides loan guarantees to small businesses that cannot obtain credit in the conventional lending market. In fiscal year 2006, the program assisted more than 80,000 businesses with loan guarantees of nearly $14 billion. This testimony, based on a 2007 report, discusses (1) the 7(a) program's purpose and the performance measures SBA uses to assess the program's results; (2) evidence of any market constraints that may affect small businesses' access to credit in the conventional lending market; (3) the segments of the small business lending market that were served by 7(a) loans and the segments that were served by conventional loans; and (4) the 7(a) program's credit subsidy costs and the factors that may cause uncertainty about these costs. As the 7(a) program's underlying statutes and legislative history suggest, the loan program is intended to help small businesses obtain credit. The 7(a) program's design reflects this legislative history, but the program's performance measures provide limited information about the impact of the loans on participating small businesses. As a result, the current performance measures do not indicate how well SBA is meeting its strategic goal of helping small businesses succeed. The agency is currently undertaking efforts to develop additional, outcome-based performance measures for the 7(a) program, but agency officials said that it was not clear when they might be introduced or what they might measure. Limited evidence from economic studies suggests that some small businesses may face constraints in accessing credit because of imperfections in the conventional lending market, such as credit rationing. Several studies GAO reviewed generally concluded that credit rationing was more likely to affect small businesses because lenders could face challenges in obtaining enough information on these businesses to assess their risk. 
However, the studies on credit rationing were limited, in part, because the literature relies on data from the early 1970s through the early 1990s, which do not account for recent trends in the small business lending market, such as the increasing use of credit scores. Though researchers have noted disparities in lending options among different races and genders, the evidence is inconclusive as to whether discrimination explains these differences. 7(a) loans went to certain segments of the small business lending market in higher proportions than conventional loans. For example, from 2001 to 2004, 25 percent of 7(a) loans went to small business start-ups compared to an estimated 5 percent of conventional loans. More similar percentages of 7(a) and conventional loans went to other market segments; for example, 22 percent of 7(a) loans went to women-owned firms in comparison to an estimated 16 percent of conventional loans. The characteristics of 7(a) and conventional loans differed in several key respects: 7(a) loans typically were larger and more likely to have variable rates, longer maturities, and higher interest rates. SBA's most recent reestimates of the credit subsidy costs for 7(a) loans made during fiscal years 1992 through 2004 indicate that, in general, the long-term costs of these loans would be lower than initially estimated. SBA makes its best initial estimate of the 7(a) program's credit subsidy costs and revises the estimate annually as new information becomes available. In fiscal years 2005 and 2006, SBA estimated that the credit subsidy cost of the 7(a) program would be equal to zero--that is, the program would no longer require annual appropriations of budget authority--by, in part, adjusting fees paid by lenders. 
However, the most recent reestimates, including those made since 2005, may change because of the inherent uncertainties of forecasting subsidy costs and the influence of economic conditions such as interest rates on several factors, including loan defaults and prepayment rates.
The National School Lunch Program was established in 1946 by the National School Lunch Act and is intended to safeguard the health and well-being of the nation’s children. The program provides nutritionally balanced low-cost or free lunches to children in public and nonprofit private schools and residential child care institutions. In fiscal year 2012, the federal government spent $11.6 billion on the National School Lunch Program, which served lunches to 31.6 million children on average each month. The school lunch program is overseen by USDA’s Food and Nutrition Service (FNS) through its headquarters and regional offices and is administered through state agencies and local SFAs (see fig. 1). FNS defines program requirements and provides reimbursements to states for lunches served. FNS also provides states with commodities—foods produced in the United States that are purchased by USDA and provided to SFAs—based on the number of lunches served. States have written agreements with SFAs to administer the meal programs, and states provide federal reimbursements to SFAs and oversee their compliance with program requirements. SFAs plan, prepare, and serve meals to students in schools. Although federal requirements for the content of school lunches have existed since the National School Lunch Program’s inception in 1946, as research has documented changes in the diets of Americans and the increasing incidence of overweight and obesity in the United States, the federal government has taken steps to improve the nutritional content of lunches. Specifically, since 1994, federal law has required SFAs to serve school lunches that are consistent with the Dietary Guidelines for Americans. In 2004, federal law required USDA to issue rules providing SFAs with specific recommendations for lunches consistent with the most recently published version of the Guidelines. 
As a result of that requirement, USDA asked the Institute of Medicine to review the food and nutritional needs of school-aged children in the United States using the 2005 Dietary Guidelines for Americans and provide recommended revisions to meal requirements for the National School Lunch Program. The Healthy, Hunger-Free Kids Act of 2010 required USDA to update federal requirements for the content of school lunches based on the Institute of Medicine’s recommendations, which were published in 2010. USDA issued final regulations that made changes to many of the lunch content and nutrition requirements in January 2012 and required that many of the new lunch requirements be implemented beginning in school year 2012-2013. (See fig. 2.) Regarding the lunch components—fruits, vegetables, meats, grains, and milk—lunches must now include fat-free or low-fat milk, limited amounts of meats/meat alternates and grains, and whole grain-rich foods. Further, lunches must now include both fruit and vegetable choices, and although students may be allowed to decline two of the five lunch components they are offered, they must select at least one half cup of fruits or vegetables as part of their meal. (See fig. 3 for examples of lunches with three and five components.) Regarding the nutrition standards, the regulations now include maximum calorie levels for lunches, require that lunches include no trans fat, and set future targets to reduce sodium in lunches. In addition to changes to the content of lunches, regulations also required that all SFAs use the same approach for planning lunch menus—Food-Based Menu Planning. This approach involves providing specific food components in specific quantities, where previously districts could choose from a variety of approaches. Further, the new regulations require SFAs to plan menus based on one set of student grade groups—grades K-5, grades 6-8 and grades 9-12—regardless of whether their schools align with these groups. 
Although regulations have long adjusted lunch content requirements by student grade level, previous regulations allowed SFAs a few student grade group options from which to choose those that best aligned with their schools. In addition to changes to the content and nutrition requirements for school lunches, the Healthy, Hunger-Free Kids Act of 2010 required that USDA update the requirements for school breakfasts and establish new standards for all other foods and beverages sold in schools, which are commonly referred to as competitive foods because they compete with school meal programs. USDA’s January 2012 final regulations on the new lunch requirements also included the new breakfast requirements that are to be implemented over several school years, beginning generally in school year 2013-2014. The regulations establish three meal components for breakfast—fruit or vegetable, grain or meat, and milk—and require that breakfasts include whole grain-rich foods and only fat-free or low-fat milk. Additional changes to the previous breakfast requirements include that breakfasts must now be at or below calorie maximums and comply with limits on sodium and trans fat. Beginning in school year 2014-2015, schools must offer one cup of fruit with each breakfast each day, an increase from the previous requirement of ½ cup, though vegetables meeting specific requirements may be substituted for fruit. In addition, as with lunch, students will be required to take a fruit or vegetable as part of their meal. Separate from the lunch and breakfast regulations, USDA issued an interim final rule on the new requirements for competitive foods in June 2013 and required that they be implemented in school year 2014-2015. Competitive foods are often sold through vending machines, school stores, and fundraisers, and also include SFA sales of a la carte items in the cafeteria. 
Prior to the enactment of the Healthy, Hunger-Free Kids Act of 2010, USDA’s authority to regulate competitive foods was limited to those foods sold in the food service area during meal periods. In contrast, the new regulations include nutrition requirements for all foods and beverages sold on school campuses during the school day outside of the federal school meal programs. SFAs generally determine the prices they charge for school meals, but some children are eligible to receive free or reduced-price meals. Under the National School Lunch Act, children are eligible for free meals if their families have incomes at or below 130 percent of the federal poverty guidelines and reduced-price meals if their families have incomes between 130 and 185 percent of the federal poverty guidelines. SFAs can charge a maximum of $0.40 for a reduced-price lunch. Children who are not eligible for free or reduced-price meals pay the full price charged by the SFA for the meal. However, SFAs receive federal reimbursements for all lunches served to eligible students that meet federal lunch component and nutrition requirements, regardless of whether children pay for the meals or receive them for free. The amount of federal reimbursement that SFAs receive for each meal served to a child is based on the eligibility category of the child and the proportion of the SFA’s total lunches that are served to children eligible for free and reduced-price meals. For example, in school year 2013-2014, federal reimbursements are $2.93 for each free lunch, $2.53 for each reduced-price lunch, and $0.28 for each paid lunch for SFAs with less than 60 percent of their total lunches served to children eligible for free and reduced-price meals. SFAs with a higher proportion of their total lunches served to children eligible for free and reduced-price meals may qualify for a higher per-lunch reimbursement rate. SFAs must comply with certain financial requirements when operating the National School Lunch Program. 
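The reimbursement structure described above is straightforward arithmetic: each lunch served is reimbursed at a per-lunch rate determined by the student's eligibility category. The following is a purely illustrative sketch, not a USDA tool; it assumes the school year 2013-2014 rates quoted above for an SFA serving less than 60 percent of its lunches to children eligible for free and reduced-price meals, and the function name is ours.

```python
# Illustrative only: SY 2013-2014 per-lunch federal reimbursement rates
# (in cents) for an SFA with less than 60 percent of lunches served to
# children eligible for free or reduced-price meals, per the rates above.
RATES_CENTS = {"free": 293, "reduced": 253, "paid": 28}

def total_reimbursement(free_lunches, reduced_lunches, paid_lunches):
    """Return total federal reimbursement in dollars for lunches served.

    Works in integer cents to avoid floating-point rounding artifacts.
    """
    total_cents = (free_lunches * RATES_CENTS["free"]
                   + reduced_lunches * RATES_CENTS["reduced"]
                   + paid_lunches * RATES_CENTS["paid"])
    return total_cents / 100
```

For example, under these assumed rates an SFA serving 100 free, 50 reduced-price, and 200 paid lunches would receive $475.50 in federal reimbursements.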
Specifically, the National School Lunch Act requires that SFAs operate as nonprofit entities. Federal regulations further dictate that SFAs must use all revenue for the operation or improvement of the program and generally limit their net cash resources—the cash SFAs carry in their accounts—to three months of average operating expenditures. In the event that an SFA’s resources exceed this limit, the state agency may require the SFA to invest the excess funds in the program or otherwise reduce the SFA’s account balance. The Healthy, Hunger-Free Kids Act of 2010 contained two new revenue requirements related to the prices SFAs set for paid lunches and other foods sold outside of the school meal programs. These provisions were developed, in part, because of a USDA study that found the average prices charged for paid lunches and for other foods by some SFAs were less than the cost of producing those foods. While SFAs continue to determine the price they charge for school lunches, beginning in school year 2011-2012, the Act requires SFAs to provide the same level of support for paid lunches as is provided for free lunches. If an SFA’s average paid lunch price is less than a specified amount—$2.59 for school year 2013-2014—the SFA must either increase the price it charges for paid lunches or provide non-federal funding to cover the difference. Concerning other foods sold by SFAs outside of the school meal programs, the Act requires that revenues from the sales of these foods generate at least the same proportion of SFA revenues as they contribute to SFA food costs, in effect requiring SFAs to charge prices that cover the costs of those foods. 
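The paid lunch equity comparison above can be expressed as a simple per-lunch check. This is a simplified sketch, not USDA's actual regulatory formula (which is based on reimbursement-rate differences and caps on required price increases); only the $2.59 school year 2013-2014 threshold comes from the text, and the function name is ours.

```python
# Simplified sketch of the paid lunch equity check; amounts are in
# integer cents to avoid floating-point artifacts. The $2.59 threshold
# is the SY 2013-2014 figure cited in the text.
THRESHOLD_CENTS = 259

def equity_shortfall_cents(avg_paid_price_cents):
    """Per-lunch gap (in cents) the SFA must close by raising its paid
    lunch price or contributing non-federal funds; zero if the average
    price already meets the threshold."""
    return max(0, THRESHOLD_CENTS - avg_paid_price_cents)
```

Under this sketch, an SFA charging an average of $2.49 for paid lunches would face a 10-cent-per-lunch gap, while one charging $2.75 would face none.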
As required by the National School Lunch Act, USDA policies and regulations establish an oversight and monitoring framework for the National School Lunch Program to help ensure that meals served meet content and nutrition requirements and that SFAs follow required eligibility and financial practices and maintain sound financial health. USDA is required to review state administration of the program, and states are required to review SFA administration of the program. Although states have been required to regularly review SFA administration of the National School Lunch Program for over two decades, the Healthy, Hunger-Free Kids Act of 2010 required USDA to amend its unified accountability system to ensure SFA compliance with requirements for all school meal programs. Further, the Act requires states to review SFAs on a 3-year cycle, which is a change from the previous 5-year cycle. While USDA has not yet issued regulations on the new requirements, USDA has developed and provided states with guidance on an updated and streamlined administrative review process that changes some of the review procedures and required review areas. For example, USDA developed risk-based tools to determine the degree to which each area must be reviewed and made changes to procedures used when SFAs claim or receive federal reimbursements to which they are not entitled. Specifically, states are now required to review certain areas, and USDA added review of SFA financial management to the administrative review process. Further, USDA modified the extent to which states must review SFAs’ nonprofit food service accounts, use of commodities, indirect costs, and compliance with requirements for pricing paid lunches and other foods sold outside of the school meal programs. USDA officials told us that the new administrative review process was developed with extensive input from a workgroup that included state and USDA representatives. 
Although a new 3-year cycle of administrative reviews began in school year 2013-2014, because federal regulations have not yet been updated to reflect the new administrative review process, USDA used its waiver authority to provide states with the flexibility to follow the new administrative review requirements or the previous requirements. For school year 2012-2013, as SFAs worked to implement the required changes to the content of school lunches, USDA established interim procedures for program oversight. Specifically, to ensure that state agencies provided training and technical assistance to help SFAs implement the changes to the content of lunches, USDA allowed states to postpone administrative reviews until school year 2013-2014. Instead, during school year 2012-2013, states were required to review documentation submitted by SFAs and certify those SFAs determined to be in compliance with the new lunch requirements. USDA also required states to conduct on-site validation reviews of a sample of at least 25 percent of certified SFAs to ensure SFA compliance. Once certified, SFAs receive an additional six cents for each reimbursable lunch served, as provided for in the Healthy, Hunger-Free Kids Act of 2010. The National School Lunch Program’s oversight and monitoring requirements are part of the program’s internal controls, which are an integral component of management. Internal control is not one event, but a series of actions and activities that occur on an ongoing basis. Effective internal controls include creating an organizational culture that promotes accountability and the reduction of errors, analyzing program operations to identify areas that present the risk of error, making policy and program changes to address the identified risks, and monitoring the results and communicating the lessons learned to support further improvement. 
Despite the National School Lunch Program’s oversight and monitoring requirements, the program has been found to have a relatively high incidence of program errors. For example, USDA’s most recent study of program errors found that $248 million in improper payments (3.1 percent of federal reimbursements) during school year 2005-2006 resulted from school food service staff incorrectly assessing and recording lunches eligible for federal reimbursement. At the time of the study, federal requirements for the content of lunches had been consistent for 10 years. Nationwide, participation in the National School Lunch Program declined in recent years after having increased steadily for more than a decade. According to our analysis of USDA’s data, total student participation—the total number of students who ate school lunches—dropped from school years 2010-2011 through 2012-2013 for a cumulative decline of 1.2 million students (or 3.7 percent), with the majority of the decrease occurring during school year 2012-2013. (See fig. 4.) The decrease in the total number of students eating school lunches during the last 2 school years was driven primarily by a decrease of 1.6 million students paying full price for meals, despite increases in the number of students eating school lunches who receive free meals. While the number of students who buy full-price lunches each month has been declining gradually since school year 2007-2008, the largest one-year decline—10 percent—occurred in school year 2012-2013. In contrast, the number of students participating in the program each month who receive free meals has steadily increased over the years, though the increase was much smaller in the last year. (See fig. 5.) In addition, some evidence suggests that the total number of students eating school lunches declined more in schools with older students. 
For example, in six of the seven SFAs we visited that provided participation details by school level, participation declined to a greater extent among older students than elementary students in school year 2012-2013. The changes in lunch program participation were likely influenced by factors that directly affected students’ eligibility for free and reduced-price school meals. Since the recent economic downturn began in late 2007, the number of children under age 18 living in poverty nationwide has increased substantially, according to data from the U.S. Census Bureau. Consistent with this shift, our analysis of USDA’s data shows that the number of students approved for free meals nationally has been increasing at a greater rate since school year 2007-2008, and the number of students required to pay full price for their lunches has been decreasing. (See fig. 6.) This was also true in the districts we visited, where two SFA directors noted that the recent economic downturn likely contributed to an increase in the number of children approved for free and reduced-price meals in their districts. In addition to economic conditions, other program changes may have also influenced these trends, such as adjustments to the process for determining student eligibility for free and reduced-price school meals. Consistent with declines in the number of students participating in the lunch program, our analysis shows that the proportion of all students eating school lunches declined in school year 2012-2013. The participation rate measures the proportion of all students in schools with the National School Lunch Program who ate school lunches in each month. In school year 2012-2013, the overall participation rate declined, primarily driven by a decline in the participation rate for paid students. (See fig. 7.) In that year, the participation rate for paid students declined to approximately 38 percent—the lowest rate in over a decade. 
Several factors likely influenced the recent decreases in lunch participation, and while the extent to which each factor affected participation is unclear, state and local officials reported that the decreases were influenced by changes made to comply with the new lunch content and nutrition standards. Almost all states reported that student acceptance of the changes was challenging for at least some of their SFAs during school year 2012-2013, a factor that likely affected participation. All eight SFAs we visited also noted that students expressed dislike for certain foods that were served to comply with the new requirements, such as whole grain-rich products and vegetables in the beans and peas (legumes) and red-orange sub-groups, and this may have affected participation. Further, some SFAs we visited noted that negative student reactions to lunches that complied with the new meat and grain portion size limits directly affected program participation in their districts. For example, in one district, changes the SFA made to specific food items, such as sandwiches, contributed to a middle and high school boycott of school lunch by students that lasted for 3 weeks at the beginning of school year 2012-2013. During this time, participation in school lunch significantly declined in those schools. Federally-required increases in the prices of paid lunches in certain districts—also known as paid lunch equity—are another change that state and SFA officials believe likely influenced lunch participation. This requirement, included in the Healthy, Hunger-Free Kids Act of 2010, caused many SFAs to raise the price of their paid lunches beginning in school year 2011-2012. Officials from three states and four SFAs we spoke with as part of our site visits believe the price increases likely contributed to declines in the number of students buying full-price lunches. 
In addition, SFA officials in two districts we visited expressed concern that lunch price increases are particularly difficult for families who do not receive free or reduced-price lunches but have limited incomes, as the new prices may no longer be affordable. Further, SFA officials in two districts believed that lunch price increases, combined with the lunch content changes, led some students to stop buying school lunches because they felt they were being asked to pay more for less food. Some middle and high school students we talked to in these districts echoed this sentiment and said this combination led them to consider food options other than the school lunch program, particularly at the beginning of the 2012-2013 school year. SFA officials noted that middle and high schools are more likely to have alternatives to school lunches available, such as foods sold through vending machines, a la carte lines in the cafeteria, and fundraisers, as well as policies that allow students to purchase food off of the school campus. The reaction to the paid lunch price increases is consistent with USDA’s expectations. Prior to implementation, the department estimated that nearly all schools would need to increase their lunch prices in response to the requirements, and these increases were expected to decrease the number of students eating school lunches as they chose not to eat, brought their lunches from home, or acquired food from other sources. Although the paid lunch equity provisions were included in the Healthy, Hunger-Free Kids Act of 2010 in part to help SFAs cover the costs of the foods needed to comply with the new lunch requirements, some officials we spoke with expressed concern about the potential impact on the program if the number of students buying full-price lunches continues to decrease. 
Specifically, several state and SFA officials we spoke with expressed concerns that such a trend would hinder the program’s ability to improve the diet and overall health of all schoolchildren and potentially increase stigma in the cafeteria for low-income students. In the preamble to the interim rule on paid lunch equity requirements, USDA estimated that most schools affected by the requirements in school year 2011-2012 would need to increase paid lunch prices by only 5 cents in order to comply. (See National School Lunch Program: School Food Service Account Revenue Amendments Related to the Healthy, Hunger-Free Kids Act of 2010, 76 Fed. Reg. 35,301, 35,306 (June 17, 2011).) USDA’s research suggests that a 5-cent increase in paid lunch prices results in a 0.55 percent decrease in the student participation rate. (See USDA, School Nutrition Dietary Assessment Study-III (Alexandria, VA: November 2007).) Officials also reported receiving more feedback than normal in school year 2012-2013 with concerns about the program, which some believed was in response to negative media attention. Another factor that may have affected participation is the time allotted for lunch periods, according to officials in three districts we visited. SFA officials in one district noted that some of the changes to lunches, such as the requirement that students take a fruit or vegetable with their lunches, confused staff and students and led to longer lunch lines, particularly at the beginning of school year 2012-2013. One district we visited also made significant changes to the system students use to pay for lunch, which led to longer lunch lines early in school year 2012-2013. One SFA director noted that if the lunch lines are too long or students otherwise do not have enough time to eat, they are more likely to look elsewhere for food or not eat at all. Other decisions at the district or school level may have also affected school lunch participation in school year 2012-2013. 
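USDA's cited estimate (a 5-cent paid lunch price increase yielding a 0.55 percent decrease in the student participation rate) can be turned into a rough projection. The linear extrapolation below is our illustrative assumption, not USDA's methodology, and the function name is ours.

```python
# Illustrative linear extrapolation of USDA's estimate that each
# 5-cent paid-lunch price increase reduces the student participation
# rate by 0.55 percent. The linearity assumption is ours, for
# illustration only; real participation responses need not be linear.
def projected_rate_decrease_pct(price_increase_cents):
    """Projected percentage-point scale decrease in the paid-lunch
    participation rate for a given price increase in cents."""
    return round(price_increase_cents / 5 * 0.55, 4)
```

Under this assumption, a 10-cent increase would project to roughly a 1.1 percent decrease in the participation rate.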
For example, one district we visited stopped allowing high school students to leave campus during the lunch period, which the SFA director believed helped mitigate the lunch participation declines the district experienced. In addition, states reported through our survey that 321 SFAs in 42 states stopped participating in the National School Lunch Program in school year 2012-2013, which directly impacted the number of students able to participate in the program nationwide. While districts may choose to leave the program for various reasons, such as low student participation, twenty-seven of these states reported that the new lunch requirements were a factor in some SFAs’ decisions not to participate. USDA officials also noted other factors that may have influenced lunch participation, including school closures, mergers, moves, consolidation due to economic conditions, and issues with food service management companies. Although school lunch participation has declined, it is likely that participation will improve over time as students adjust to the lunch changes. Five of the districts we visited reported that, if the past is an indicator, participation will improve over time as students adjust to the new food items, and three noted the importance of nutrition education for students and parents to help make the transition to healthier school meals more successful. The SFA director in one district we visited that made changes to lunches prior to school year 2012-2013 in anticipation of the federal requirements initially experienced a decrease in participation, but saw participation recover in the following school year. Similarly, although the other seven districts we visited saw decreases in lunch program participation in the first months after implementing the new requirements in school year 2012-2013, participation increased in the majority of these districts as the school year progressed. 
Nationwide, fewer states expected student acceptance of the changes and palatability of foods to be challenges for SFAs in school year 2013-2014 than indicated they were challenges in school year 2012-2013, although the majority of states still expected these areas to be difficult. In four districts we visited, SFA directors noted that they had begun adding whole grains into their menus before the 2012-2013 school year, and they saw student acceptance of whole grain products improve over time. One district’s SFA director also noted that students’ willingness to eat foods in the beans and peas (legumes) sub-group has improved over time. When we talked with students in the schools we visited and asked them about the lunches, these specific foods were mentioned by some students in four of the eight districts we visited. However, at the same time, most of the students we spoke with indicated that they like to eat healthy and nutritious foods, and they think school lunches generally provide such foods. Further, although school year 2012-2013 was the first year that students nationwide were required to take a fruit or a vegetable with their school lunches, when we asked students what they liked about school lunch that year, students in 13 of the 17 schools we visited reported liking certain fruit and vegetable options. As SFAs began implementing the new lunch requirements in school year 2012-2013, they faced several challenges. For example, most states reported that their SFAs faced challenges with plate waste—or foods thrown away rather than consumed by students—and food costs, as well as planning menus and obtaining foods that complied with the new portion size and calorie requirements. (See fig. 8.) The majority of states also reported that food service staff workload and food storage or kitchen equipment were challenges for their SFAs while implementing the new lunch requirements. 
The eight SFAs we visited also experienced these challenges, although at the same time, all eight expressed support for the goal of improving the nutritional quality of lunches and felt the new requirements were moving in that direction. Addressing plate waste has been a longstanding challenge in the school lunch program, and officials in six of the districts we visited told us they believe plate waste increased in school year 2012-2013 because of the new lunch requirements. Specifically, students may take the food components they are required to as part of the school lunch but then choose not to eat them. Although none of the districts we visited had fully analyzed plate waste over the past few years to determine if it changed during school year 2012-2013, SFAs we visited said that the fruits and vegetables students are now required to take sometimes end up thrown away. Consistent with this, in our lunch period observations in 7 of 17 schools, we saw many students throw away some or all of their fruits and vegetables. However, in the other 10 schools, we saw students take and eat sizable quantities of fruits and vegetables and the other lunch components, resulting in minimal plate waste. Four of the eight SFAs we visited mentioned that plate waste was more of an issue with the youngest elementary school students, possibly because of the amount of food served with the lunch and the amount of time they have to eat it. The Institute of Medicine report that recommended the new lunch requirements acknowledged differences in food intake among elementary students. The report noted that the amounts of food offered under the new recommendations may be too large for some of the younger elementary schoolchildren because they are more likely to have lower energy needs than older elementary schoolchildren being served the same lunches. Managing the food costs associated with implementing the new lunch requirements was another challenge reported by all of the SFAs we visited. 
In all eight SFAs, fruit and vegetable expenditures increased substantially during school year 2012-2013, as compared to school year 2011-2012, consistent with the new requirements that both fruits and vegetables be offered daily with lunches and each student take at least one fruit or vegetable with lunch. However, increases varied among the SFAs we visited. Several factors likely affected the extent to which these costs increased in SFAs nationwide as they implemented the new requirements, including the availability of produce suppliers, economies of scale, and the amount of fruits and vegetables previously served with lunches. Some SFA officials we spoke with also noted that fruit and vegetable costs may vary greatly from year to year because of factors that are difficult to plan for when budgeting, such as the weather’s impact on growing seasons. By the end of school year 2012-2013, increased fruit and vegetable costs and other factors had negatively impacted the overall financial health of six SFAs we visited. The SFAs we visited also cited difficulties planning menus that complied with the new requirements, including the portion size and calorie range requirements. All eight SFAs modified or eliminated some popular menu items because of the new portion size requirements for meats and grains. For example, two districts stopped serving peanut butter and jelly sandwiches as a daily option in elementary schools, and three districts reported that they changed the burgers they served. Specifically, one district removed cheeseburgers from elementary and middle school lunch menus because adding cheese to the district’s burger patties would have made it difficult to stay within the weekly meat maximums. Because lunch entrees frequently consist of meats and grains and provide the majority of calories in meals, the limits on meats and grains made it difficult for SFAs to plan lunches that complied with both the portion size and calorie range requirements. 
For example, in order to meet the minimum calorie requirements, some SFAs reported that they added foods to their menus that generally did not improve the nutritional value of lunches, such as pudding or potato chips. Further, students or school officials in five districts raised concerns about students being hungry after eating lunches that complied with the new requirements, which some of the students we spoke with attributed to the smaller entrée sizes. The calorie range requirements caused additional difficulties in SFAs whose districts included schools with students in both the 6-8 and 9-12 grade groups. These SFAs faced challenges planning menus that met the requirements for both groups because the calorie ranges for lunches served to those groups do not overlap. One SFA we visited planned its menus for schools serving students in both groups to generally provide a calorie total in between the two ranges, which is not in compliance with requirements and may have left older students feeling hungry after lunch. USDA temporarily provided SFAs some flexibility to help address these challenges, and we recommended in our June 2013 testimony that the department take additional steps. Specifically, in response to feedback from states and SFAs regarding operational challenges caused by the meat and grain maximums, USDA lifted the maximums temporarily—first in December 2012 for school year 2012-2013 and then in February 2013 for school year 2013-2014. USDA indicated that it provided these flexibilities in response to the challenges it had heard about and did not see a problem making the temporary changes because the new lunch content standards include other requirements that also limit portion sizes. In our June 2013 testimony, we recommended that USDA permanently remove the weekly meat and grain maximums, and in January 2014, USDA issued regulations that remove the requirement for SFAs to comply with these maximums. 
We also recommended in our testimony that USDA provide flexibility to help SFAs comply with the lack of overlap in the calorie ranges for the 6-8 and 9-12 grade groups. While USDA generally agreed with the recommendation, the department has not yet taken action to address that issue. SFAs we visited also discussed other challenges they faced planning lunch menus that complied with the new requirements in school year 2012-2013. For example, officials from five SFAs noted that the requirements sometimes led them to serve meals they would not otherwise have planned because the specific food combinations are generally not served together as a meal. For example, one SFA served saltine crackers and croutons with certain salads to meet the minimum daily grain requirement and a cheese stick with shrimp to meet the minimum daily meat requirement. Several SFA directors and school food service managers also noted that the new requirements made it very difficult to make substitutions if they ran out of a particular menu item, because serving alternative food items in one day’s lunch may result in the week’s lunches exceeding the meat or grain limits or failing to include vegetables from all five sub-groups. Another factor that complicated school year 2012-2013 menu planning in the SFAs we visited was food procurement. Three of the SFAs we visited noted that because food orders for school year 2012-2013 were placed in the initial months of 2012—at the same time that guidance on the new requirements was being issued—they procured foods without knowing what was required. Consequently, one SFA ordered more meat and poultry than was needed for the year, and another SFA inadvertently ordered foods that were not in compliance with the new requirements. Several SFA officials in districts we visited also mentioned that it was difficult to obtain from vendors certain food products that met the new requirements. 
For example, one SFA had difficulties obtaining the fresh produce it wanted to serve with lunches because of the increase in volume needed to comply with the new requirements. Further, three SFAs had challenges obtaining grain options from vendors that met portion size or whole grain requirements and were palatable to students. In three of the SFAs we visited, staff were still working with vendors during the school year to obtain food products needed to comply with requirements. SFA officials in some of the districts we visited and representatives from a group of food manufacturers and related industries we spoke with indicated that they had too little time between issuance of the final regulations and required implementation to reformulate food products to comply with the new lunch requirements. Further, after some products were reformulated, the temporary flexibilities that USDA provided for the meat and grain portion size requirements left industry experiencing difficulties forecasting demand, which led to food production, inventory, and storage challenges. In one school we visited, food service staff experienced related challenges at the end of school year 2012-2013, as several items on the school’s lunch menu were no longer being produced by vendors who were waiting for more certainty from USDA on the meat and grain requirements. According to SFA staff in all eight districts we visited, the workload for food service staff increased because of the new lunch requirements, and officials in some of the SFAs also noted that the requirements created new food storage and kitchen equipment challenges. School food service staff in all eight districts noted that workload increased primarily because of the need to prepare more fruits and vegetables each day to meet requirements. (See fig. 9.) 
In two of the smallest districts we visited, the increased workload in this area required staff reorganizations in which staff previously responsible for baking began helping to prepare fruits and vegetables. Staff in one SFA noted that the increased amount of time and effort to prepare fruits and vegetables also led to morale issues when staff saw students throw the fruits and vegetables in the trash. Further, two SFAs that chose to increase their use of fresh produce in lunches, rather than relying on canned or frozen products, reported that this required more frequent deliveries of these foods because of limited food storage capacity on-site in the schools. In one of these SFAs, the more frequent deliveries resulted in increased costs from the supplier, and in the other, they seemed to increase the likelihood of workplace injuries related to unloading and lifting. In addition to the need for more food storage space in schools, some SFAs we visited discussed new kitchen equipment needs that resulted from the changes to the lunch requirements, such as the need for new spoons and ladles to match the new portion size requirements, and food choppers and other equipment used for preparing fruits and vegetables to be served. As SFAs, food service staff, and students adjust to the new lunch requirements, it is likely that some challenges that arose during implementation of the new requirements in school year 2012-2013 will become less problematic. For example, fewer states reported that they expect menu planning, including the required portion sizes for lunch components and calorie ranges for lunches, to challenge their SFAs in school year 2013-2014 than reported these areas as challenges in school year 2012-2013. Although many states expect these areas to continue to be challenging, the flexibilities USDA recently made permanent related to the meat and grain portion size requirements should help ease menu planning moving forward. 
As more time elapses, food manufacturers expect the availability of foods that comply with the new requirements and are palatable to students to increase, easing SFA challenges with food procurement and plate waste. Food manufacturers we spoke with reported that they spent school year 2012-2013 focused primarily on reformulating food products to comply with the new lunch requirements. They added that because of the short timeframes between the issuance of the requirements for lunches and implementation, they did not have as much time to focus on food palatability, but as time elapses, they may have more time to do so. Further, both research and the experiences of some of our site visit districts suggest that students will likely adjust to the new lunch menus with time, which should result in decreased plate waste. While many states expect managing plate waste and food procurement to challenge their SFAs in school year 2013-2014, a greater number of states reported these areas as challenges for their SFAs in school year 2012-2013. In contrast, other areas may continue to be challenges in the future, including those related to costs and infrastructure. In our survey, a similar number of states reported that they expect their SFAs will be challenged by food costs and food storage or kitchen equipment in school year 2013-2014 as were challenged by those areas in school year 2012-2013. Some of the SFAs we visited also suggested that these areas will likely be ongoing challenges. To try to remedy storage challenges, one SFA we visited had developed plans to expand coolers and freezers on-site in schools, which was not an option in another SFA we visited due to facility and resource constraints. A third SFA’s plans to remodel the kitchen and serving lines in its largest school were put on hold because of the negative financial impact the SFA experienced in part because of the new lunch requirements. 
Overall, future costs were particularly concerning to some of the SFAs we visited. For example, officials in four SFAs expressed concerns about remaining financially solvent after the new requirements for school breakfasts or competitive foods are implemented, as some expect the breakfast changes to increase costs and the competitive foods changes to decrease revenues. Moving forward, states and SFAs also expressed concerns about the federally-required sodium limits for school lunches. The first of three sodium limits must be met beginning in school year 2014-2015. Many of the SFAs we visited noted that these limits will likely present a significant menu planning challenge, primarily because many of the foods available from manufacturers do not yet comply with these limits and students may not accept foods that meet the limits. These concerns were echoed by officials we spoke with in four states, with some noting that it will be very difficult for food manufacturers to make foods that meet the limits and are palatable to students. USDA has acknowledged that complying with the new limits will be a significant challenge that will require new technology and food products, and has explained that these issues were considered when the department decided to require sodium to be reduced gradually over 10 years. During our site visits, officials in many of the eight SFAs also expressed concerns about the future nutrition standards for competitive foods, and these concerns were not fully addressed in USDA’s interim final rule on the standards. At the time of our visits, SFAs expressed concerns that certain aspects of USDA’s proposed rule on the standards would be challenging to implement, if finalized. For example, officials from seven of the eight SFAs we visited expressed concerns about what they viewed as a lack of clarity regarding how the nutrition standards for competitive food sales administered by entities other than the SFA would be enforced. 
Officials from five of the SFAs we visited also expressed concerns about the provision that would allow states discretion to exempt certain fundraisers from the standards, because such exemptions may result in inequitable treatment and put the SFA at a competitive disadvantage relative to other food sales within a school. USDA’s interim final rule on the competitive food standards, issued in June 2013, requires school districts and SFAs to maintain records documenting compliance with the competitive food standards, and indicates that states and school districts will be responsible for ensuring compliance. However, it notes that forthcoming rules will describe state oversight requirements and fines for noncompliance. In addition, although USDA received many comments requesting that the department approve state decisions on fundraiser exemptions, the interim final rule does not require USDA approval of state decisions. USDA provided a substantial amount of guidance and training to assist states and SFAs in complying with the required changes to school lunch, which states indicated was useful. According to USDA officials, the department’s assistance effort has been unprecedented. From January 2011—the month after the Healthy, Hunger-Free Kids Act of 2010 was enacted—through September 2013, USDA issued about 90 memos to provide guidance to states and SFAs on the new requirements for the content of school lunches and paid lunch equity. (See fig. 10.) Most of the memos (85 percent) addressed the new requirements for lunch content and nutrition standards, as well as related issues such as food procurement and state review of SFA compliance with the lunch requirements. The remaining 15 percent of the memos addressed the paid lunch equity requirements. Over the past few years, USDA also provided training through several venues to help states and SFAs implement the changes. 
For example, USDA officials convened webinars and in-person trainings for states, participated in webinars and national conferences for SFAs, and worked with the National Food Service Management Institute to provide additional training and resources. USDA’s regional offices also provided training to states. In addition, as the changes were implemented in school year 2012-2013, USDA officials reported that they conducted an extensive amount of travel to visit school districts around the country to see how their efforts to implement the changes were progressing and to obtain feedback on additional assistance needed. All states reported that USDA’s guidance and training were useful as the new school lunch requirements were implemented. Further, over half of the states reported that USDA’s guidance was very useful or extremely useful, and officials from seven of the eight states we interviewed as part of our site visits expressed appreciation for USDA’s efforts to respond to issues that arose as changes were implemented. In contrast, some states and SFAs noted that the relatively short timeframes within which the lunch requirements were implemented made it difficult to keep up with the extensive amount of guidance provided by USDA. In the 18 months from January 2012—the month in which the final rule on the changes to the lunch content and nutrition standards was issued—through the end of school year 2012-2013, USDA issued 1,800 pages of guidance on these changes. Several SFAs we visited noted that keeping up with the extensive amount of guidance was difficult during school year 2012-2013 because they were simultaneously implementing the lunch changes. In addition, 32 states reported through our survey that the timing of USDA’s guidance on the new lunch requirements was a very great challenge or extreme challenge during school year 2012-2013—a response echoed by most of the states we spoke with as part of our eight site visits. 
For example, officials in one state reported that the guidance providing SFAs with flexibility on the meat and grain maximums was provided too late in school year 2012-2013 to be helpful, as SFAs had already planned menus and trained food service staff on the new meat and grain requirements. Further, because of the fast pace with which USDA provided guidance on the new lunch requirements, officials from four states said that the department’s regional offices were sometimes unable to answer state questions on the guidance. While six of the eight states we spoke with as part of our site visits commended the efforts that the regional offices took to help states understand the new lunch requirements, some noted that regional offices learned about the requirements at the same time as states. Because of this, regional office staff were not always able to answer state questions on the guidance, and states had to instead wait for USDA headquarters’ staff to respond. USDA officials told us that while they recognize that the lunch changes were defined and implemented rather quickly, this was necessary because of the importance of improving school meals. Almost two-thirds of states also reported through our survey that the changes USDA made to its guidance on the lunch requirements were a very great challenge or extreme challenge during school year 2012-2013. According to our analysis, 40 percent of the guidance memos issued by USDA on the new requirements for the content of school lunches and paid lunch equity from January 2011 through September 2013 contained new flexibilities not included in federal regulations or substantive changes to previously issued guidance, which were to be enforced either temporarily or permanently. (See fig. 11.) 
According to USDA’s general counsel, the department felt that it was important to provide such flexibilities to help ease the implementation process, although the guidance is technically non-binding and does not modify statutory or regulatory requirements. For example, USDA issued several guidance memos from February through December 2012 that added flexibilities related to the fruit, milk, meat, and grain components of lunches, which had not been included in the January 2012 final regulations. Further, some guidance memos either substantively changed or contradicted aspects of previously issued memos. For example, in a February 2012 guidance memo, USDA indicated that frozen fruit served with lunches was not allowed to contain added sugar after school year 2012-2013. However, in memos issued in September 2012 and June 2013, the department indicated that fruit with added sugar would be allowed in school years 2013-2014 and 2014-2015, respectively. While SFA officials we spoke with noted that some of these changes were likely made by USDA to respond to problems SFAs were having implementing the new lunch requirements, the guidance changes were difficult to keep up with and led to increased confusion about the requirements. Further, officials in six of the states we interviewed as part of our site visits reported that the changes USDA made to its guidance also frustrated SFAs or complicated training on the new lunch requirements. Officials from three states we spoke with as part of our site visits also reported that changes might have been avoided if USDA had piloted or phased in the new requirements more slowly, which suggests that the challenges states and SFAs experienced because of the lack of timely and consistent guidance from USDA in school year 2012-2013 may become less problematic over time. 
USDA officials told us that their assistance efforts, and the changes made to guidance, reflected the department’s recognition that the process needed to be iterative, as unexpected issues with the requirements would likely arise as the new lunch standards were implemented in the schools. While SFAs transition to the new lunch requirements, USDA has emphasized the importance of state assistance in helping SFAs comply. According to the Standards for Internal Control in the Federal Government, federal agencies should have policies and practices in place to provide reasonable assurance that programs are operated in compliance with applicable laws and regulations. To this end, USDA officials told us that they directed states to work with SFAs to achieve compliance with the new lunch requirements during school year 2012-2013. Nationwide, 45 states reported that they used the additional administrative funds they received for fiscal year 2013 to conduct training for SFAs and provide technical assistance to SFAs. In addition, many states reported using these funds to certify SFA compliance with the new meal requirements and conduct required validation reviews of a sample of those certified. Officials in all eight of the states we spoke with as part of our site visits reported that they provided extensive guidance and assistance to SFAs to help them understand and implement the new lunch requirements and become certified as in compliance with the requirements. Although SFAs likely needed and benefited from state assistance as they worked to implement the new lunch requirements, USDA’s emphasis on assistance, combined with new financial incentives for compliance, may have led to incomplete identification and documentation of SFA noncompliance. For many years, states have conducted administrative reviews of SFAs and observed lunches in schools in order to assess SFA compliance with federal requirements. 
In the past, USDA consistently noted the importance of these reviews for ensuring the integrity of the National School Lunch Program, as the review process requires that noncompliance be addressed. Under this process, instances of SFA noncompliance are required to be documented and lead to a corrective action plan and follow-up to ensure issues are addressed. In addition, the documentation of issues has also provided federal and state officials with information on areas for which additional assistance may be needed across SFAs. During school year 2012-2013, however, states were generally not required to conduct administrative reviews. Rather, states were to assist SFA efforts to comply with the new lunch requirements. Further, states were required to provide SFAs certified as in compliance with the new requirements an extra 6 cents of federal reimbursement for each lunch served and conduct on-site validation reviews in a sample of the certified SFAs. USDA officials reported that they considered the added funds to be an important way to offset the extra costs SFAs incurred as they made the required changes to lunches. However, while the certification and validation process likely helped SFAs understand the new requirements and obtain the additional reimbursement to help with compliance, unlike the administrative review process, states were not required to fully document issues of noncompliance they identified. For example, officials in two states we spoke with noted that USDA instructed them to work with SFAs during the certification process and during validation reviews of those certified, rather than strictly enforce requirements. This is consistent with USDA’s guidance, which emphasized the provision of assistance during the certification process and validation reviews to help SFAs become compliant. 
However, because states were generally not required to document noncompliance issues that arose during the certification process or validation reviews, SFAs may not have developed or documented corrective action plans taken to address them. As a result, SFAs may not have adequate information on the types of ongoing compliance issues and the need to take corrective actions. Moreover, USDA has limited information on the extent to which SFAs are facing similar difficulties complying with the new requirements, which could be the focus of future federal technical assistance efforts. National data, as well as our conversations with states and visits to schools, suggest that some instances of SFA noncompliance may not have been fully documented while the new lunch requirements were being implemented. We and others have reported that SFAs experienced significant challenges implementing the new lunch requirements during school year 2012-2013, and several state officials we spoke with as part of our site visits told us that SFAs often needed a lot of state assistance to move forward with the new requirements, also suggesting that SFAs faced significant challenges fully complying. However, national data suggest that these challenges affected few SFAs’ certifications. Specifically, 82 percent of SFAs nationwide applied to be certified as in compliance with the new lunch requirements during school year 2012-2013, and states denied 1 percent of SFAs that applied. (See fig. 12.) National data on the results of state validation reviews of SFAs show similar outcomes, as 1 percent of SFAs had the extra federal reimbursement stopped by their states because of noncompliance issues found during these reviews. When reviewing the certification and validation results by state, we found that 25 states did not deny any SFAs that applied for certification and validated all SFAs reviewed. 
While states reported that they were unable to validate compliance in an additional 4 percent of SFAs reviewed nationwide, states did not stop these SFAs from continuing to receive the extra federal reimbursement, possibly because of changes in USDA guidance during school year 2012-2013. Because the certification and validation process did not require states to document issues of noncompliance, the extent to which noncompliance issues occurred in SFAs is unknown. One state told us that it interpreted USDA’s guidance to mean that the state should not stop an SFA from receiving the extra federal reimbursement even when it was unclear if issues of noncompliance found during a validation review would be fully addressed by the SFA. In another state, we obtained evidence that some of the lunch menus in an SFA we visited may not have been fully in compliance with the new requirements, though the SFA was certified and validated, and in two additional certified SFAs we visited, we observed practices in schools that were inconsistent with the new requirements. Although five of the SFAs we visited had been certified to be in compliance with the new requirements at the time of our site visits, the possible noncompliance issues in three of these five SFAs indicate the difficulty of ensuring proper implementation of the lunch changes in all of an SFA’s schools, particularly during the first year that changes were required. The Healthy, Hunger-Free Kids Act of 2010 required that nutrition standards for school lunches be updated to help reduce childhood obesity and improve children’s diets, and evidence suggests that lunches served are now better designed to meet those goals. 
Although some decreases in student lunch participation and challenges for SFAs occurred in the first year that the lunch changes were implemented, these outcomes are likely related to the substantial scale of the changes and the short time in which they were implemented. As a result, both participation and many of the challenges SFAs faced initially are expected to improve with the passage of time as students and SFAs adjust to the new lunch requirements. Since the act was passed, USDA has focused on the provision of assistance to help SFAs comply with the new requirements. While this emphasis may have been needed given the scope of the changes and the short timeframe for implementation, it alone will not ensure that students nationwide are being served healthier school lunches. Rather, only when government assistance is combined with an emphasis on program integrity will it be possible to ensure that healthier school lunches are served nationwide. The administrative review process has long been key to both addressing issues of noncompliance in the National School Lunch Program and ensuring that federal and state governments have information on areas for which they should target additional assistance to SFAs to improve program compliance. However, even with this process in place, program errors resulting from lunches that did not comply with requirements being served to students have been a long-standing issue. The substantial changes to the lunch requirements, combined with the delays and changes made in federal guidance while the requirements were being implemented, as well as temporary changes to the process through which states reviewed SFA compliance, increase the likelihood that lunches served to students may not meet all of the requirements. 
In addition, while USDA has developed a new administrative review process, which includes new requirements related to SFA financial management, the review process will not be effective without state understanding of all of the requirements they are responsible for overseeing. Further, because the Healthy, Hunger-Free Kids Act of 2010 included two new provisions that relate to SFA financial management— those addressing paid lunch prices and revenue from foods sold outside of the school meals programs—without effective oversight of SFA financial management by states, neither states nor the federal government will have assurance that SFAs are correctly implementing these requirements. As new requirements are added this year and in the future for the School Breakfast Program and competitive foods, timely and consistent USDA guidance combined with an effective administrative review process are all vital to ensuring successful implementation of the changes to school food and achieving the laudable goal of improving schoolchildren’s diets and health. To improve program integrity, as USDA moves forward with its new administrative review process, we recommend that the Secretary of Agriculture direct the Administrator for the Food and Nutrition Service to take the following actions: clarify to states the importance of documenting compliance issues found during administrative reviews and requiring corrective actions to address them, and continue efforts to systematically assess all states’ needs for information to improve their ability to oversee SFA financial management and provide assistance to meet identified needs. We provided a draft of this report to USDA for review and comment. In oral comments, the Senior Policy Advisor to the Deputy Administrator for Child Nutrition Programs and other USDA officials generally agreed with our recommendations. 
These officials also noted that they consider the emphasis on technical assistance associated with school meals implementation in school year 2012-2013 appropriate given that the new meal patterns represented a major transition for local program operators. They also indicated their belief that the level of review associated with the 6-cent certification process, including detailed review of meal pattern documentation and on-site reviews of at least 25 percent of certified SFAs, provides a solid foundation for ongoing oversight of compliance moving forward. Officials also expressed their belief that the new administrative review process, which they developed in collaboration with states, is an effective and efficient monitoring process that will improve program integrity. Further, they noted that their ongoing efforts to assist state efforts to properly implement the new process will help ensure that states are able to effectively review all required areas, including SFA financial management. We agree that the new administrative review process, if properly implemented, could improve program integrity, and as we discuss in the report, we agree that the emphasis on technical assistance likely benefited SFAs as they transitioned to the new lunch requirements. However, we continue to believe that the changes made in oversight requirements during school year 2012-2013 may have left USDA without key information on compliance issues SFAs faced when implementing the changes and may have created confusion among states as to the importance of consistently documenting noncompliance for program integrity. While we also remain concerned that the change in oversight requirements during school year 2012-2013 and the department’s continued emphasis on state assistance to SFAs moving forward may inadvertently undercut the effectiveness of the new review process, we see opportunities for USDA to address these issues. 
Specifically, as USDA continues its efforts to communicate and collaborate with states during their implementation of the new review process, the department is well-positioned to emphasize the importance of documenting noncompliance for effective program oversight and to provide states with the information they need to effectively review all required areas. USDA officials also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess trends in school lunch participation, we analyzed USDA’s national data on meals served in the National School Lunch Program from school year 2000-2001 through school year 2012-2013. Each month, states report to USDA on the FNS-10 form the number of lunches served by category of student—free, reduced-price, and paid—as well as average daily lunches served to all students. These data are used to determine federal reimbursement payments to states. Additionally, in October of each school year, states report to USDA the total number of students enrolled in schools with the National School Lunch Program, as well as the total number of students approved for free and reduced-price meals in that month. 
Subtracting these students from the total enrolled students provides the number of students required to pay full price for their meals, if they choose to buy them, in schools with the National School Lunch Program. Although USDA does not collect additional data on the number of students participating in the program each month, the department uses the lunch data it collects to determine the number of students participating in the program. Specifically, USDA adjusts the data on average daily lunches served each month upward to help account for students who participated in the program on fewer than all of the days in the month. To make this adjustment, USDA uses an estimate of the proportion of students nationwide that attend school daily. To analyze participation in the National School Lunch Program, we reviewed USDA’s data on meals served and students enrolled, as well as the department’s methodology for determining student participation, and determined these data and the method to be sufficiently reliable for the purposes of this report. Specifically, we interviewed USDA officials to gather information on the processes they use to ensure the completeness and accuracy of the school lunch data, reviewed related documentation, and compared the data we received from the department to its published data. To determine school year participation from these data, we relied on 9 months of data—September through May—for each year. To determine the participation rate, we divided the number of students participating per month by the total number of students enrolled in schools with the program. We followed the same approach to determine the participation rates for students receiving free and reduced-price lunches, as well as those who paid full price for their lunches.
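The adjustment and rate calculation described above amount to a short computation. The sketch below illustrates the method; the attendance factor and all input figures are hypothetical examples chosen for illustration, not USDA's actual values.

```python
# Illustrative sketch of the participation-rate method described above.
# ATTENDANCE_FACTOR is an assumed nationwide share of enrolled students
# attending school daily -- NOT an actual USDA figure.
ATTENDANCE_FACTOR = 0.94

def monthly_participation(avg_daily_lunches_served: float) -> float:
    """Adjust average daily lunches served upward to estimate the number
    of students participating, per the method described in the text."""
    return avg_daily_lunches_served / ATTENDANCE_FACTOR

def participation_rate(avg_daily_lunches_served: float,
                       enrolled: float) -> float:
    """Estimated participants divided by total students enrolled in
    schools with the program."""
    return monthly_participation(avg_daily_lunches_served) / enrolled

# Example with made-up numbers: 28.2 million average daily lunches served,
# 50 million students enrolled in schools with the program.
rate = participation_rate(28_200_000, 50_000_000)
print(f"{rate:.1%}")  # prints 60.0%
```

The same calculation would be repeated separately for the free, reduced-price, and paid categories to produce the category-level rates the report describes.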
To understand the scale and scope of assistance USDA has provided to states and SFAs, we analyzed guidance memos USDA issued from January 2011—the month after the Healthy, Hunger-Free Kids Act was enacted—through September 2013. We reviewed all guidance memos issued to states during this time period and further analyzed those that provided guidance addressing the new requirements for the content of school lunches, including related issues such as food procurement and state review of SFA compliance with the lunch content requirements, as well as those addressing the paid lunch equity requirements. These memos included the department’s policy and technical assistance memos, as well as other relevant guidance memos that were not designated in one of those categories. For guidance memos that were released in multiple versions, we considered each version to be a separate piece of guidance. We categorized the guidance memos by their primary topic and analyzed their content to determine whether they clarified regulations, provided new flexibilities related to requirements included in federal regulations, or substantively changed previously-issued guidance. We also assessed the number of pages included in each document, defined as the number of digital pages for each guidance document, including attachments. In the case of spreadsheet files, we counted each worksheet within the file as a single page. We did not conduct an independent legal analysis of these guidance memos. To obtain information on state efforts related to implementation of the new school lunch content and nutrition requirements, we conducted a national survey of state child nutrition directors who oversee the National School Lunch Program in the 50 states and the District of Columbia. We administered our Web-based survey between June and July 2013, and all state directors responded.
The survey included questions about SFA challenges with the new lunch requirements, state use of administrative funds, and USDA assistance to states. The survey also requested data on SFAs and schools participating in the program, SFAs that left the program in school year 2012-2013, and state certification and validation of SFAs in compliance with the new requirements. Because separate agencies oversee the administration of the National School Lunch Program in public and private schools in five states, we surveyed both agencies in each of these states. These five states are Arkansas, Colorado, Georgia, Oklahoma, and Virginia. In two of these states, separate state agencies oversee public and private schools administering the program, while in the remaining three, private schools administering the program are overseen by the relevant FNS regional office. For these five states, when analyzing survey results for questions with numerical responses, we combined the answers from entities overseeing both public and private schools in order to represent the state as a whole. For all other questions, such as those concerning SFA challenges with the lunch requirements, we reported responses from the agency overseeing the program in public schools because those agencies represented the majority of schools with the program in these states. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a Web-based administration system. Specifically, during survey development, we pretested draft instruments with child nutrition directors from three states (Louisiana, Texas, and Virginia) in May 2013. 
We selected the pretest states to provide variation in state school lunch program characteristics and geographic location. In the pretests, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section were appropriate. We revised the final survey based on pretest results. Another step we took to minimize nonsampling errors was using a Web-based survey. Allowing respondents to enter their responses directly into an electronic instrument created a record for each respondent in a data file and eliminated the need for and the errors associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. While we did not fully validate specific information that states reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. For example, we reviewed the responses and identified those that required further clarification and, subsequently, solicited follow-up information from those respondents via email and phone to ensure the information they provided was reasonable and reliable. In our review of the data, we also identified and logically fixed skip pattern errors for questions that respondents should have skipped but did not. On the basis of these checks, we believe our survey data are sufficiently reliable for the purposes of our work. To gather information from the local level on implementation of the new lunch content and nutrition requirements, we conducted site visits to eight school districts across the country between March and May 2013. 
The school districts we visited were: Caddo Parish Public Schools (LA), Carlisle Area School District (PA), Chicago Public Schools (IL), Coeur d’Alene School District (ID), Fairfax County Public Schools (VA), Irving Independent School District (TX), Mukwonago Area School District (WI), and Spokane Public Schools (WA). We selected these school districts because they provided variation across geographic location, district size, and certain characteristics of the student population and district food services. For example, the proportion of students eligible for free and reduced-price lunches and the racial and ethnic characteristics of the student population varied across the districts selected. Further, we selected districts with different food service approaches, including some that generally prepared school lunches in one central kitchen before delivering them to schools, some that prepared lunches in kitchens on-site in each school, and others that used alternative approaches for lunch preparation. Seven of the school districts we visited managed their own food service operations, while one district contracted with a food service management company. We relied on the U.S. Department of Education’s Common Core of Data, which provides information on public schools, to ensure selected districts met several of our criteria. As a result, all of the districts we selected for site visits were public, although non-profit private elementary and secondary schools, as well as residential child care institutions, also participate in the National School Lunch Program. In each district, to gather information on local level implementation of the new lunch requirements, we interviewed the SFA director, as well as other key district-level SFA staff and food service staff in at least two schools.
During these interviews, we collected information about lunch participation trends; challenges, if any, implementing the new lunch requirements; and USDA and state assistance with the changes. To select the schools we visited in each district, we worked with the SFA director to ensure the schools included students of differing grade levels in order to capture any relevant differences in their reactions to the new lunch requirements. In each school, we observed lunch—including students’ food selections, consumption, and plate waste—and, when feasible, interviewed students and school staff to obtain their thoughts on the lunch changes. We also interviewed the eight state child nutrition program directors overseeing these districts to gather information on statewide lunch participation trends; SFA challenges, if any; and USDA and state assistance with implementation of the changes. Following the site visits, in late summer 2013, we obtained school lunch participation data for school years 2008-2009 through 2012-2013 and information about school year 2012-2013 finances from the eight SFA directors. We cannot generalize our findings from the site visits beyond the school districts we visited. Under the previous federal requirements for the content of school lunches, SFAs could choose to use one of five approved approaches to plan their menus. Three of these approaches focused on nutrient requirements and, aside from milk, did not specify food components or portion size requirements. Under the two remaining approaches, Traditional and Enhanced Food-Based Menu Planning, schools had to comply with specific food component and portion size requirements, as well as nutrient requirements. See tables 1 through 4 for details of the previous Food-Based Menu Planning approaches. 
Following passage of the Healthy, Hunger-Free Kids Act of 2010, USDA updated federal requirements for the content of school lunches and required all SFAs to use the same approach for planning lunch menus—Food-Based Menu Planning. In the January 2012 final rule on these changes, USDA noted that over 70 percent of program operators were already using the Food-Based Menu Planning approach to plan their lunch menus. See table 5 for details of the current lunch content and nutrition requirements. Kay E. Brown, (202) 512-7215 or brownke@gao.gov. In addition to the contact named above, Rachel Frisk and Kathy Larin (Assistant Directors), Robert Campbell, Jean McSween, Dan Meyer, Christine San, and Zachary Sivo made key contributions to this report. Also contributing to this report were James Bennett, Nora Boretti, Jessica Botsford, Sheila McCoy, Paul Schearf, and Almeta Spencer. School Lunch: Modifications Needed to Some of the New Nutrition Standards. GAO-13-708T. Washington, D.C.: June 27, 2013. School Meal Programs: More Systematic Development of Specifications Could Improve the Safety of Foods Purchased through USDA’s Commodity Program. GAO-11-376. Washington, D.C.: May 3, 2011. School Meal Programs: Improved Reviews, Federal Guidance, and Data Collection Needed to Address Counting and Claiming Errors. GAO-09-814. Washington, D.C.: September 9, 2009. School Meal Programs: Changes to Federal Agencies’ Procedures Could Reduce Risk of School Children Consuming Recalled Food. GAO-09-649. Washington, D.C.: August 20, 2009. School Meal Programs: Experiences of the States and Districts That Eliminated Reduced-price Fees. GAO-09-584. Washington, D.C.: July 17, 2009. Meal Counting and Claiming by Food Service Management Companies in the School Meal Programs. GAO-09-156R. Washington, D.C.: January 30, 2009. School Meals Programs: Competitive Foods Are Widely Available and Generate Substantial Revenues for Schools. GAO-05-563. Washington, D.C.: August 8, 2005.
Nutrition Education: USDA Provides Services through Multiple Programs, but Stronger Linkages among Efforts Are Needed. GAO-04-528. Washington, D.C.: April 27, 2004. School Meals Programs: Competitive Foods Are Available in Many Schools; Actions Taken to Restrict Them Differ by State and Locality. GAO-04-673. Washington, D.C.: April 23, 2004. School Lunch Program: Efforts Needed to Improve Nutrition and Encourage Healthy Eating. GAO-03-506. Washington, D.C.: May 9, 2003.
The National School Lunch Program served more than 31 million children in fiscal year 2012, in part through $11.6 billion in federal supports. The Healthy, Hunger-Free Kids Act of 2010 required USDA to update nutrition standards for lunches. USDA issued new requirements for lunch components--fruits, vegetables, grains, meats, and milk--and for calories, sodium, and fats in meals. USDA oversees state administration of the program, and states oversee local SFAs, which provide the program in schools. The changes were generally required to be implemented in school year 2012-2013. GAO was asked to provide information on implementation of the lunch changes. GAO assessed (1) lunch participation trends, (2) challenges SFAs faced implementing the changes, if any, and (3) USDA's assistance with and oversight of the changes. To address these areas, GAO used several methods, including review of federal laws, regulations, and guidance; analysis of USDA's lunch participation data; a national survey of state child nutrition program directors; and site visits to eight school districts selected to provide variation in geographic location and certain school district and food service characteristics. Nationwide, student participation in the National School Lunch Program declined by 1.2 million students (or 3.7 percent) from school year 2010-2011 through school year 2012-2013, after having increased steadily for many years. This decrease was driven primarily by a decline of 1.6 million students eating school lunch who pay full price for meals, despite increases in students eating school lunch who receive free meals. State and local officials reported that the changes to lunch content and nutrition requirements, as well as other factors, influenced student participation. 
For example, almost all states reported through GAO's national survey that obtaining student acceptance of lunches that complied with the new requirements was challenging during school year 2012-2013, which likely affected participation in the program. Federal, state, and local officials reported that federally-required increases to lunch prices, which affected many districts, also likely influenced participation. School food authorities (SFA) faced several challenges implementing the new lunch content and nutrition requirements in school year 2012-2013. For example, most states reported that SFAs faced challenges with addressing plate waste--or foods thrown away rather than consumed by students--and managing food costs, as well as planning menus and obtaining foods that complied with portion size and calorie requirements. SFAs that GAO visited also cited these challenges. However, both states and SFAs reported that they expect many of these areas will become less challenging over time, with the exceptions of food costs, insufficient food storage and kitchen equipment, and the forthcoming limits on sodium in lunches. The U.S. Department of Agriculture (USDA) provided a substantial amount of guidance and training to help with implementation of the lunch changes and program oversight, but certain aspects of USDA's guidance may hinder state oversight of compliance. Starting in school year 2012-2013, USDA allowed states to focus their oversight of the lunch changes on providing technical assistance to SFAs rather than documenting instances of noncompliance and requiring corrective actions to address them. This assistance likely helped many SFAs move toward compliance with the new lunch requirements and become certified to receive increased federal reimbursements for lunches. However, evidence suggests this approach may have also resulted in some SFAs that were not fully meeting requirements being certified as in compliance. 
Without documentation of noncompliance and requirements for corrective actions, SFAs may not have the information needed to take actions to address these issues, and USDA may lack information on areas that are problematic across SFAs. Moving forward, USDA has been developing a new process for conducting program oversight, in part because of new statutory requirements. This new process adds requirements for reviewing SFA financial management, and many states reported a need for more guidance and training in this area. USDA has acknowledged that states' processes for reviewing this area have been inconsistent and sometimes inadequate in the past. While USDA has provided some assistance to states on the new requirements related to SFA financial management, until USDA has collected information from all states on their needs in this area, the department will not know if all states are fully prepared to oversee SFA financial management. To improve program integrity, GAO recommends that USDA clarify the need to document noncompliance issues found during state reviews of SFAs and complete efforts to assess states' assistance needs related to oversight of financial management. USDA generally agreed with GAO's recommendations.
FOIA, originally enacted in 1966, establishes a legal right of access to government records and information, on the basis of the principles of openness and accountability in government. Before the act, an individual seeking access to federal records had faced the burden of establishing a right to examine them. FOIA established a “right to know” standard for access, instead of a “need to know,” and shifted the burden of proof from the individual to the government agency seeking to deny access. The act has been amended several times, including in 1974, 1976, 1986, 1996, and 2002. FOIA provides the public with access to government information either through “affirmative agency disclosure”—publishing information in the Federal Register or making it available in reading rooms—or in response to public requests for disclosure. Public requests for disclosure of records are the best known type of FOIA disclosure. Any member of the public may request access to information held by federal agencies, without showing a need or reason for seeking the information. The act prescribes nine specific categories of information that are exempt from disclosure; agencies may cite these exemptions in denying access to material (see table 1). The act also includes provisions for excluding specific sensitive records held by law enforcement agencies. The act requires agencies to notify requesters of the reasons for any adverse determination and grants requesters the right to appeal agency decisions to deny access. In addition, agencies are required to meet certain time frames for making key determinations: whether to comply with requests (20 business days from receipt of the request), responses to appeals of adverse determinations (20 business days from filing of the appeal), and whether to provide expedited processing of requests (10 business days from receipt of the request).
Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available promptly. We have reported several times in the past on the contents of the annual reports of 25 major agencies, covering fiscal years 1998 through 2002. We first reported information in 2001 on the implementation of the 1996 amendments to FOIA. At that time we recommended that Justice (1) encourage agencies to make material electronically available and (2) review agency annual reports to address specific data quality issues. Since our report was issued, Justice has taken steps to implement both of these recommendations. In 2002, we reported that the number of requests received and processed appeared for most agencies—except the Department of Veterans Affairs—to peak in fiscal year 2000 and decline slightly in fiscal year 2001. In our 2004 report, we reported that between 2000 and 2002, the number of requests received and processed declined when the Department of Veterans Affairs is excluded. We also reported that agencies’ backlogs of pending requests were declining, and that the number of FOIA requests denied governmentwide had dropped dramatically between 2000 and 2001 and remained low in 2002. The Department of Justice and the Office of Management and Budget (OMB) both have roles in the implementation of FOIA. The Department of Justice oversees agencies’ compliance with FOIA and is the primary source of policy guidance for agencies. OMB is responsible for issuing guidelines on the uniform schedule of fees. 
Specifically, Justice’s requirements under the act are to
● make agencies’ annual FOIA reports available through a single electronic access point and notify Congress as to their availability;
● in consultation with OMB, develop guidelines for the required annual agency reports, so that all reports use common terminology and follow a similar format; and
● submit an annual report on FOIA statistics and the efforts undertaken by Justice to encourage agency compliance.
Within the Department of Justice, the Office of Information and Privacy (OIP) has lead responsibility for providing guidance and support to federal agencies on FOIA issues. OIP first issued guidelines for agency preparation and submission of annual reports in the spring of 1997 and periodically issued additional guidance. OIP also periodically issues guidance on compliance, provides training, and maintains a counselor service to provide expert, one-on-one assistance to agency FOIA staff. It also makes a variety of FOIA and Privacy Act resources available to agencies and the public via the Justice Web site and on-line bulletins. In addition, the act requires OMB to issue guidelines to “provide for a uniform schedule of fees for all agencies.” In charging fees for responding to requests, agencies are required to conform to the OMB guidelines. Further, in 1987, the Department of Justice issued guidelines on waiving fees when requests are determined to be in the public interest. Under the guidelines, requests for waivers or reduction of fees are to be considered on a case-by-case basis, taking into account both the public interest and the requester’s commercial interests. The 1996 FOIA amendments, referred to as e-FOIA, require that agencies submit a report to the Attorney General on or before February 1 of each year that covers the preceding fiscal year and includes information about agencies’ FOIA operations.
The following are examples of information that is to be included in these reports:
● number of requests received, processed, and pending;
● median number of days taken by the agency to process different types of requests;
● determinations made by the agency not to disclose information and the reasons for not disclosing the information;
● disposition of administrative appeals by requesters;
● information on the costs associated with handling of FOIA requests; and
● full-time-equivalent staffing information.
In addition to providing their annual reports to the Attorney General, agencies are to make them available to the public in electronic form. The Attorney General is required to make all agency reports available online at a single electronic access point and report to Congress no later than April 1 of each year that these reports are available in electronic form. As agencies process FOIA requests, they generally place them in one of four possible disposition categories: grants, partial grants, denials, and “not disclosed for other reasons.” These categories are defined as follows:
● Grants: agency decisions to disclose all requested records in full.
● Partial grants: decisions to withhold some records in whole or in part, because such information was determined to fall within one or more exemptions.
● Denials: agency decisions not to release any part of the requested records because all information in the records is determined to be exempt under one or more statutory exemptions.
● Not disclosed for other reasons: agency decisions not to release requested information for any of a variety of reasons other than statutory exemptions from disclosing records.
The categories and definitions of these “other” reasons for nondisclosure are shown in table 2. When a FOIA request is denied in full or in part, or the requested records are not disclosed for other reasons, the requester is entitled to be told the reason for the denial, to appeal the denial, and to challenge it in court.
FOIA also authorizes agencies to recoup certain direct costs associated with processing requests, and agencies also have the discretion to reduce or waive fees under various circumstances. Agency determinations about fees and fee waivers are complex decisions that include determining (1) a requester’s fee category, (2) whether a fee waiver is to be granted, and (3) the actual fees to be charged. FOIA stipulates three types of fee categories for requesters: (1) commercial; (2) educational or noncommercial scientific institutions and representatives of the news media; and (3) other. Further, fees can be charged for three types of FOIA-related activities—search, duplication, and review—depending on the requester’s fee category. In addition, fees may not be charged to a requester in certain situations, such as when a fee waiver is granted or when the applicable fees are below a certain threshold. Commercial users can be charged for the broadest range of FOIA-related activities, including document search, review, and duplication. Commercial use is defined in the OMB fee schedule guidelines as “a use or purpose that furthers the commercial, trade or profit interests of the requester or the person on whose behalf the request is being made.” The second category exempts search and review fees for documents sought for noncommercial use by educational or noncommercial scientific institutions, and for documents sought by representatives of the news media. The third category of fees, which applies to all requesters who do not fall within either of the other two categories, allows for “reasonable” charges for document search and duplication. Table 3 shows the FOIA-related activities for which agencies can charge by fee category, as stipulated in the act. Although the act generally requires that requesters pay fees to cover the costs of processing their requests, in certain circumstances, fees are not to be charged.
For example, as stipulated in the act, fees may not be charged when the government’s cost of collecting and processing the fee is likely to equal or exceed the amount of the fee itself. Further, under certain circumstances, the act requires an agency to furnish documents without charge, or at reduced charges. This is commonly referred to as the FOIA fee-waiver provision. Based on this provision, an agency must provide a fee waiver if two conditions are met:
● disclosure of the requested information is in the public interest because it is likely to contribute significantly to public understanding of the operations or activities of the government, and
● disclosure of the information is not primarily in the commercial interest of the requester.
Under the act and guidance, when these requirements are both satisfied, based upon information supplied by a requester or otherwise made known to the agency, the fee waiver or reduction is to be granted by the FOIA officer. When one or both of these requirements are not satisfied, a fee waiver is not warranted. As these criteria suggest, fee waivers are to be granted on a case-by-case basis. Individuals who receive fee waivers in some cases may not necessarily receive them in other cases. In addition to FOIA, the Privacy Act of 1974 includes provisions granting individuals the right to gain access to and correct information about themselves held by federal agencies. Thus the Privacy Act serves as a second major legal basis, in addition to FOIA, for the public to use in obtaining government information. The Privacy Act also places limitations on agencies’ collection, disclosure, and use of personal information. Although the two laws differ in scope, procedures in both FOIA and the Privacy Act permit individuals to seek access to records about themselves—known as “first-party” access.
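The two-part fee-waiver test described earlier in this section is a simple conjunction: both conditions must hold for a waiver to be warranted. A minimal sketch follows; the function and field names are our own illustrative choices, not terms from the statute or OMB guidelines, and a real determination involves case-by-case judgment that a boolean flag cannot capture.

```python
from dataclasses import dataclass

@dataclass
class WaiverRequest:
    # Illustrative fields standing in for an agency's case-by-case findings.
    contributes_to_public_understanding: bool  # public-interest condition
    primarily_commercial_interest: bool        # commercial-interest condition

def fee_waiver_warranted(req: WaiverRequest) -> bool:
    """A waiver is warranted only when BOTH conditions are met: disclosure
    is in the public interest AND is not primarily in the requester's
    commercial interest. Failing either condition defeats the waiver."""
    return (req.contributes_to_public_understanding
            and not req.primarily_commercial_interest)

print(fee_waiver_warranted(WaiverRequest(True, False)))  # prints True
print(fee_waiver_warranted(WaiverRequest(True, True)))   # prints False
```

As the sketch makes explicit, a request that serves the public interest but primarily furthers the requester's commercial interest still fails the test, which mirrors the case-by-case outcomes the report describes.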
Depending on the individual circumstances, one law may allow broader access or more extensive procedural rights than the other, or access may be denied under one act and allowed under the other. Subsequently, the Department of Justice’s Office of Information and Privacy (OIP) issued guidance that it is “good policy for agencies to treat all first- party access requests as FOIA requests (as well as possibly Privacy Act requests), regardless of whether the FOIA is cited in a requester’s letter.” This guidance was intended to help ensure that requesters receive the fullest possible response to their inquiries, regardless of which law they cite. For more information about FOIA and the Privacy Act, see appendix I. Although the specific details of processes for handling FOIA requests vary among agencies, the major steps in handling a request are similar across the government. Agencies receive requests, usually in writing (although they may accept requests by telephone or electronically), which can come from any organization or member of the public. Once received, the request goes through several phases, which include initial processing, searching for and retrieving responsive records, preparing responsive records for release, approving the release of the records, and releasing the records to the requester. Figure 1 is an overview of the process, from the receipt of a request to the release of records. During the initial processing phase, a request is logged into the agency’s FOIA system, and a case file is started. The request is then reviewed to determine its scope, estimate fees, and provide an initial response to the requester. After this point, the FOIA staff begins its search to retrieve responsive records. This step may include searching for records from multiple locations and program offices. After potentially responsive records are located, the documents are reviewed to ensure that they are within the scope of the request. 
During the next two phases, the agency ensures that appropriate information is to be released under the provisions of the act. First, the agency reviews the responsive records to make any redactions based on the statutory exemptions. Once the exemption review is complete, the final set of responsive records is turned over to the FOIA office, which calculates appropriate fees, if applicable. Before release, the redacted responsive records are then given a final review, possibly by the agency’s general counsel, and then a response letter is generated, summarizing the agency’s actions regarding the request. Finally, the responsive records are released to the requester. Some requests are relatively simple to process, such as requests for specific pieces of information that the requester sends directly to the appropriate office. Other requests may require more extensive processing, depending on their complexity, the volume of information involved, the need for the agency FOIA office to work with offices that have relevant subject-matter expertise to find and obtain information, the need for a FOIA officer to review and redact information in the responsive material, the need to communicate with the requester about the scope of the request, and the need to communicate with the requester about the fees that will be charged for fulfilling the request (or whether fees will be waived). Specific details of agency processes for handling requests vary, depending on the agency’s organizational structure and the complexity of the requests received. While some agencies centralize processing in one main office, other agencies have separate FOIA offices for each agency component and field office. Agencies also vary in how they allow requests to be made. Depending on the agency, requesters can submit requests by telephone, fax, letter, or e-mail or through the Web. 
In addition, agencies may process requests in two ways, known as “multitrack” and “single track.” Multitrack processing involves dividing requests into two groups: (1) simple requests requiring relatively minimal review, which are placed in one processing track, and (2) more voluminous and complex requests, which are placed in another track. In contrast, single-track processing does not distinguish between simple and complex requests. With single-track processing, agencies process all requests on a first-in/first-out basis. Agencies can also process FOIA requests on an expedited basis when a requester has shown a compelling need or urgency for the information. Citizens have been requesting and receiving an ever-increasing amount of information from the federal government, as reflected in the increasing number of FOIA requests that have been received and processed in recent years. In fiscal year 2004, the 25 agencies we reviewed reported receiving and processing about 4 million requests, an increase of 25 percent compared to 2003. From 2002 to 2004, the number of requests received increased by 71 percent, and the number of requests processed increased by 68 percent. The 25 agencies we reviewed handle over 97 percent of FOIA requests governmentwide. They include the 24 major agencies covered by the Chief Financial Officers Act, as well as the Central Intelligence Agency and, beginning in 2003, the Department of Homeland Security (DHS) in place of the Federal Emergency Management Agency (FEMA). While the creation of DHS in fiscal year 2003 led to a shift in some FOIA requests from agencies affected by the creation of the new department, the same major component entities are reflected in all 3 years that we reviewed. For example, in 2002, before DHS was formed, FEMA independently reported on its FOIA requests, and its annual report is reflected in our analysis. 
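The difference between the two approaches can be sketched as queue routing. The numeric complexity cutoff and the `offices_to_search` field below are hypothetical; in practice agencies judge whether a request is simple or complex case by case.

```python
from collections import deque

# Hypothetical cutoff: real agencies judge complexity case by case,
# not with a fixed numeric threshold.
COMPLEX_THRESHOLD = 3

def route_multitrack(requests):
    """Split requests into simple and complex FIFO tracks (multitrack).
    Under single-track processing, all requests share one FIFO queue."""
    simple, complex_ = deque(), deque()
    for r in requests:
        track = complex_ if r["offices_to_search"] >= COMPLEX_THRESHOLD else simple
        track.append(r["id"])
    return simple, complex_

requests = [
    {"id": "A", "offices_to_search": 1},  # simple: records held by one office
    {"id": "B", "offices_to_search": 5},  # complex: voluminous, many offices
    {"id": "C", "offices_to_search": 2},
]
simple, complex_ = route_multitrack(requests)
single_track = deque(r["id"] for r in requests)  # single track: strict first-in/first-out
```

Under multitrack processing, the quick requests A and C are not stuck behind the voluminous request B, whereas the single track serves them strictly in arrival order.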
However, beginning in 2003, FEMA became part of DHS, and thus its FOIA requests are reflected in DHS figures for 2003 and 2004. In recent years, the Department of Veterans Affairs (VA) has accounted for a large portion—about half—of governmentwide FOIA requests received and processed. This is because the agency includes in its totals the many first-party medical records requests that it processes. However, VA’s numbers have not driven the large increases in FOIA requests. In fact, in 2004, the agency had a decline in the number of requests received, processed, and pending compared to 2003. Thus, when VA is excluded from governmentwide FOIA request totals, the increase between 2003 and 2004 changes from 25 percent to 61 percent. Figure 2 shows total requests reported governmentwide for fiscal years 2002 through 2004, with VA’s share shown separately. In 2004, most dispositions of FOIA requests (92 percent) were reported to have been granted in full, as shown in table 4. Only relatively small numbers were partially granted (3 percent), denied (1 percent), or not disclosed for other reasons (5 percent). When VA is excluded from the totals, the percentages remain roughly comparable. Agencies other than VA that reported receiving large numbers of requests in fiscal year 2004 included the Social Security Administration (SSA), the Department of Health and Human Services (HHS), and the Department of Homeland Security (DHS), as shown in figure 3. Agencies other than VA, SSA, HHS, and DHS accounted for only 9 percent of all requests. Three of the four agencies that handled the largest numbers of requests—VA, SSA, and HHS—also granted the largest percentages of requests in full. However, as shown in figure 4, the numbers of fully granted requests varied widely among agencies in fiscal year 2004. For example, three agencies—State, the Central Intelligence Agency, and the National Science Foundation—made full grants of requested records in less than 20 percent of the cases they processed. 
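The arithmetic behind this effect can be illustrated with hypothetical volumes. The figures below are not the actual request counts; they were chosen only to reproduce the reported 25 percent and 61 percent growth rates and to show how a large, declining component can mask growth elsewhere.

```python
# Hypothetical volumes chosen only to reproduce the reported percentages;
# they are not the actual governmentwide request counts.
va_2003, va_2004 = 1_200_000, 1_140_000          # large component, slightly declining
others_2003, others_2004 = 1_000_000, 1_610_000  # rest of government, growing sharply

def pct_increase(old, new):
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

with_va = pct_increase(va_2003 + others_2003, va_2004 + others_2004)
without_va = pct_increase(others_2003, others_2004)
```

With these illustrative numbers, aggregate growth comes out to 25 percent, while growth excluding the large declining component comes out to 61 percent: the flat-or-falling half of the workload pulls the combined rate down.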
Eight of the 25 agencies we reviewed made full grants of requested records in over 60 percent of their cases. This variance among agencies in the disposition of requests has been evident in prior years as well. In addition to processing greater numbers of requests, many agencies (13 of 25) also reported that their backlogs of pending requests—requests carried over from one year to the next—have increased since 2002. In 2002, pending requests governmentwide were reported to number about 140,000, whereas in 2004, about 160,000—14 percent more—were reported. Mixed results were reported in reducing backlogs at the agency level—some backlogs decreased while others increased, as reported from 2002 through 2004. The number of requests that an agency processes relative to the number it receives is an indicator of whether an agency’s backlog is increasing or decreasing. Six of the 25 agencies we reviewed reported processing fewer requests than they received each year for fiscal years 2002, 2003, and 2004—therefore increasing their backlogs (see fig. 4). Nine additional agencies also processed fewer requests than they received in two of these three years. In contrast, five agencies (CIA, Energy, Labor, SBA, and State) had processing rates above 100 percent in all three years, meaning that each made continued progress in reducing their backlogs of pending cases. Thirteen agencies were able to make at least a small reduction in their backlogs in 1 or more years between fiscal years 2002 and 2004. FOIA does not require agencies to make records available within a specific amount of time. As I mentioned earlier, Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available promptly. Agencies, however, are required to inform requesters within 20 days of receipt of a request as to whether the agency will comply with the request. 
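The backlog indicator described above reduces to simple arithmetic. The agency figures below are hypothetical, used only to show the relationship between the processing rate and the year-end backlog.

```python
def processing_rate(received, processed):
    """Requests processed as a percentage of requests received; a value
    above 100 means the agency is working down its backlog."""
    return 100 * processed / received

def year_end_backlog(start_backlog, received, processed):
    """Pending requests carried over into the next fiscal year."""
    return start_backlog + received - processed

# Hypothetical figures for one agency in one fiscal year.
rate = processing_rate(received=5_000, processed=5_200)
backlog = year_end_backlog(start_backlog=1_000, received=5_000, processed=5_200)
```

Here the agency processed 200 more requests than it received, so its rate is 104 percent and its backlog falls from 1,000 to 800; a rate below 100 percent would instead grow the backlog.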
For 2004, the reported time required to process requests by track varied considerably among agencies (see table 5). Eleven agency components reported processing simple requests in less than 10 days, as evidenced by the lower values of the reported ranges. These components are part of the Departments of Energy, Education, Homeland Security, Health and Human Services, the Interior, Justice, Labor, Transportation, the Treasury, and Agriculture. On the other hand, some organizations are taking much more time to process simple requests, such as components of Energy, Interior, and Justice. This can be seen in upper-end values of the reported median ranges that exceed 100 days. Components of four agencies (Interior, Education, Treasury, and VA) reported processing complex requests quickly—in less than 10 days. In contrast, several other agencies (DHS, Energy, Justice, Transportation, Education, HHS, HUD, State, Treasury, and Agriculture) reported components taking longer to process complex requests, with median times greater than 100 days. Four agencies (HHS, NSF, OPM, and SBA) reported using single-track processing. Processing times for the single track varied from 5 days (at SBA) to 182 days (at an HHS component). Based on the data in agency annual reports, it was not feasible to determine trends at the agency level in the amount of time taken to process requests (reported annually as the median number of days to process requests). This is largely because many agencies have reported median processing times at a component level, making it difficult to derive overall agency median processing times. Nearly half (12 of 25) of the agencies reported median times at a component level. Although this practice does not provide agency-level indicators, it provides visibility into differences in processing times among the various components of agencies, which can sometimes be substantial. 
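The aggregation difficulty noted above is a statistical one: a median of component medians is generally not the agency-wide median, which can be computed only from the underlying per-request times. A small hypothetical example, with invented per-request processing times for two components of the same agency:

```python
from statistics import median

# Hypothetical per-request processing times, in days, for two components
# of the same agency.
component_a = [5, 7, 9]        # a fast office; median 7 days
component_b = [20, 150, 400]   # a slow office; median 150 days

# Naively averaging the two component medians...
median_of_medians = median([median(component_a), median(component_b)])

# ...differs from the true agency-wide median, which needs the raw data.
pooled_median = median(component_a + component_b)
```

The median of the two component medians is 78.5 days, while the true pooled median is 14.5 days, which is why component-level reporting cannot simply be rolled up into an agency-level figure.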
In summary, FOIA continues to be a valuable tool for citizens to obtain information about the operation and decisions of the federal government. Agencies have received steadily increasing numbers of requests and have also continued to increase the number of requests that they process. Despite this increase in processing requests, the backlog of pending cases continues to grow. Given this steadily increasing workload, it will remain critically important that strong oversight of FOIA implementation continue in order to ensure that agencies remain responsive to the needs of citizens. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. If you should have questions about this testimony, please contact me at (202) 512-6240 or via e-mail at koontzl@gao.gov. Other major contributors included Barbara Collier, John de Ferrari, and Elizabeth Zhao. In addition to rights under the Freedom of Information Act (FOIA), individuals also have rights of access to government information under the Privacy Act of 1974. The Privacy Act restricts the federal government’s use of personal information. More precisely, it governs use of information about an individual that is maintained in a “system of records,” which is any group of records containing information about an individual from which information is retrieved by individual identifier. With regard to access, the Privacy Act gives individuals the right to have access to information about themselves that is maintained in a system of records so that they can review, challenge, and correct the accuracy of personal information held by the government. 
While both laws generally give individuals the right of access to information (subject to exemptions), there are several important differences:
● While FOIA generally gives a right of access to all federal government records, the Privacy Act applies only to records pertaining to an individual that are retrieved by individual identifier.
● While FOIA generally gives “any person” a right of access to records, the Privacy Act gives access to only the subject of a particular record and only if that person is a U.S. citizen or a lawfully admitted permanent resident alien.
● While FOIA exempts categories of records from public release, including where disclosure would constitute an unwarranted invasion of personal privacy, the Privacy Act’s exemptions pertain to a variety of the act’s requirements, not just access (e.g., that agencies account for all disclosures of personal information, that they maintain only relevant and necessary personal information, and that they notify the public of their sources for obtaining records of personal information).
Under current Department of Justice guidance, agencies are to treat an individual’s requests for his or her own records as a request under FOIA as well as the Privacy Act. This is intended to ensure that individuals are fully afforded their rights under both laws. As a practical matter, it appears that agencies generally consider requests for access to one’s own records as FOIA requests, without any separate accounting as Privacy Act requests. These requests are referred to as “first-party requests,” and their addition to agency FOIA statistics can be seen, for example, in the large numbers of FOIA requests reported by agencies such as VA and SSA. Apart from questions about the role of the Privacy Act in FOIA decisions, privacy questions are often dealt with independently under FOIA. 
The act’s two privacy exemptions protect from public release information about individuals in “personnel and medical files and similar files” and “information compiled for law enforcement purposes,” the disclosure of which would constitute an “unwarranted invasion of personal privacy.” These statutory provisions have resulted in an analysis that involves a “balancing of the public’s right to disclosure against the individual’s right to privacy.” This approach led, for example, the Supreme Court to decide that there is a significant private interest in the “practical obscurity” of criminal history records even though they are officially public records. The development and refinement of such privacy principles continues as agencies and the courts make new “balancing” decisions in FOIA cases. Accordingly, it is difficult to definitively describe the extent of privacy protection under FOIA, or to characterize federal privacy protection as limited to the terms of the Privacy Act. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Freedom of Information Act (FOIA) establishes that federal agencies must provide the public with access to government information, thus enabling them to learn about government operations and decisions. To help ensure appropriate implementation, the act requires that agencies report annually to the Attorney General, providing specific information about their FOIA operations. GAO has reported previously on the contents of these annual reports for 25 major agencies. GAO was asked to describe the FOIA process and discuss the reported implementation of FOIA. Although the specific details of processes for handling FOIA requests vary among agencies, the major steps in handling a request are similar across the government. Agencies receive requests, usually in writing (although they may accept requests by telephone or electronically), which can be submitted by any organization or member of the public. Once requests are received, the agency responds through a process that includes several phases: initial processing, searching for and retrieving responsive records, preparing responsive records for release, approving the release of the records, and releasing the records to the requester. According to data reported by agencies in their annual FOIA reports, citizens have been requesting and receiving an ever-increasing amount of information from the federal government through FOIA. The number of requests that agencies received increased by 71 percent from 2002 to 2004. Further, agencies reported they have been processing more requests--68 percent more from 2002 to 2004. For 92 percent of requests processed in 2004, agencies reported that responsive records were provided in full to requesters. However, the number of pending requests carried over from year to year--known as the backlog--has also been increasing, rising 14 percent since 2002.
Dual-eligible beneficiaries—individuals eligible for both Medicare and Medicaid—generally fall into two categories: low-income seniors (those age 65 and over) and individuals with disabilities under age 65. Requirements to protect the rights of beneficiaries under both programs are of particular importance to dual-eligible beneficiaries because of their generally greater health care needs. Several efforts have been made in the past to better integrate care for dual-eligible beneficiaries. Medicare is a federally financed program that in 2011 provided health insurance coverage to nearly 49 million beneficiaries—people age 65 and older, certain individuals with disabilities, and those with end-stage renal disease. In Medicare FFS, beneficiaries may choose their health care providers among any enrolled in Medicare. However, CMS also contracts with MA organizations, private entities that offer managed care plans to Medicare beneficiaries. As of 2011, about 25 percent of Medicare beneficiaries were enrolled in an MA plan. As part of the agency’s oversight of MA plans, CMS responds to complaints from beneficiaries and other parties, conducts surveillance, and conducts compliance audits. CMS responds to complaints from beneficiaries, health care providers, and other parties that come into the agency through a 1-800-MEDICARE phone line. It is through this mechanism that CMS generally resolves issues that are beneficiary-specific. CMS conducts surveillance by having routine discussions with managed care organizations, monitoring plan-submitted data, and tracking and monitoring complaint rates by MA plan and complaint category. CMS uses compliance audits to assess whether a managed care organization’s operations are consistent with federal laws, regulations, and CMS policies and procedures. Audits typically involve a combination of desk reviews of documents submitted by MA organizations and, at CMS’s discretion, site visits. 
Medicaid is a joint federal-state program that finances health care coverage for certain low-income individuals. To receive federal matching funds for services provided to Medicaid beneficiaries, each state must submit a state Medicaid plan for approval by CMS. The state Medicaid plan defines how the state will operate its Medicaid program, including which populations and services are covered. States must operate their Medicaid programs within broad federal parameters. While complying with these federal requirements, however, states have the flexibility to tailor their programs to the populations they serve, including the imposition of additional protections for beneficiaries. For example, states generally are required by federal Medicaid law to cover certain benefits, while other benefits may be included at a state’s option. Subject to CMS approval, states may choose to operate a portion of, or their entire, Medicaid programs as FFS or managed care. With respect to managed care, states vary widely in terms of the scope of services they provide and the populations they enroll. States have certain options when considering whether to enroll Medicaid beneficiaries into managed care, including whether enrollment is voluntary or mandatory. States may obtain the authority to mandatorily enroll Medicaid beneficiaries into managed care plans with CMS approval of a state plan amendment. However, under federal law, states cannot require certain categories of beneficiaries, including dual-eligible beneficiaries, to mandatorily enroll under this authority. 
However, states may obtain the authority to enroll Medicaid beneficiaries, including dual-eligible beneficiaries, into managed care through the approval of two types of Medicaid waivers: Section 1115 of the Social Security Act provides the Secretary of Health and Human Services with the authority to grant states waivers of certain federal Medicaid requirements and allow costs that would not otherwise be eligible for federal funds for the purpose of demonstrating alternative approaches to service delivery. Under a 1915(b) waiver, the Secretary may waive certain Medicaid requirements, allowing states to operate a managed care program to the extent it is cost-effective and consistent with the purposes of the program. See 42 U.S.C. § 1396u-2(a)(1)-(2). More recently, states are beginning to move dual-eligible beneficiaries into managed care plans as well. In 2010, about 9.3 percent of dual-eligible beneficiaries were enrolled in Medicaid managed care plans. Another type of waiver, the 1915(c) waiver, is the primary means by which states provide home- and community-based services (HCBS) to Medicaid beneficiaries. Under a 1915(c) waiver, states can provide HCBS that may not be available under the state’s Medicaid plan to beneficiaries who would, if not for the services provided under the waiver, require institutional care. Home health care is one of the services that states may provide under a 1915(c) waiver or through the state’s Medicaid plan, in addition to other services such as respite care, personal care, and case management. At the federal level, CMS oversight of state Medicaid programs includes monitoring the programs and providing guidance to states. States must provide assurances to CMS that they have mechanisms in place to ensure that any managed care organization with which the state contracts complies with federal regulations in order to obtain approval for enrolling Medicaid beneficiaries into managed care. 
Though CMS is not a party to the contract, states are required to obtain CMS approval of the contracts between states and managed care organizations in order to qualify for federal funding. States administer the day-to-day operations of their Medicaid programs. At the state level, requirements for Medicaid managed care plans are often included as part of the contract between the state and the managed care plan and may derive from federal or state law, regulations, or policies. States generally oversee managed care plans through a combination of informal and formal monitoring that may include regular meetings, reviews of plan-submitted reports, audits, and financial reviews. Medicare and Medicaid have a number of requirements intended to protect the rights of beneficiaries, some of which are of particular importance to dual-eligible beneficiaries. Medicare and Medicaid have requirements that specify the circumstances under which a beneficiary may be compelled to enroll in a managed care plan instead of obtaining services through the FFS program. How beneficiaries are enrolled in managed care, for example, whether the enrollment is mandatory or voluntary, could have implications for dual-eligible beneficiaries who may have more serious health care needs and who, because of cognitive impairments, may require assistance in understanding their options or the implications of their choices. In general, federal law and regulations do not specifically require MA plans or Medicaid managed care plans to cover services provided by a beneficiary’s previous provider if that provider is not in the plan’s network when a beneficiary first enrolls in a plan or switches plans. There are limited circumstances when managed care plans are required to cover such services during a transition period. Medicare and Medicaid also have certain federal requirements for managed care plans to ensure coordination of at least some services for beneficiaries. 
Dual-eligible beneficiaries often have complex health care needs and, therefore, may see several different providers. Accordingly, continuing relationships with providers, as well as ensuring coordination of care, are of particular importance to this population. Medicare and Medicaid have requirements for managed care plans to maintain provider networks that ensure beneficiaries can access a range of health care providers and obtain services in a timely manner. Within Medicaid managed care, provider participation problems have been specifically noted for specialty and dental care. Medicare and Medicaid have requirements about the type and format of materials provided to beneficiaries to promote enrollment into a managed care plan or communicate information about coverage and costs. Inappropriate marketing practices have in the past led some Medicare beneficiaries to enroll in MA plans in which they had not intended to enroll or that did not meet their health care needs. Inappropriate marketing may include activities such as providing inaccurate information about covered benefits and conducting prohibited marketing practices, such as door-to-door marketing without appointments and providing potential beneficiaries with meals or gifts of more than nominal value to induce enrollment. Medicare and Medicaid have requirements about how beneficiaries can qualify for certain services and the scope of coverage provided. According to CMS, two services where coverage differences between Medicare and Medicaid are particularly problematic for dual-eligible beneficiaries are nursing facility services and home health care. While both programs cover these benefits, they differ in terms of how a dual-eligible beneficiary can qualify for the benefit and the scope of the coverage provided. As a result, there can be cost-shifting between the programs. Nursing Facility Services. 
Medicare and Medicaid both set requirements for the conditions a beneficiary must meet to become eligible for coverage of nursing facility services. Medicare’s coverage of nursing facility care is limited to 100 days of posthospital skilled nursing facility (SNF) services. SNF services may only be provided in an inpatient setting and include skilled nursing (such as intravenous injections, administration of prescription medications, and administration and replacement of catheters); room and board; and physical, occupational, and speech language therapies. In contrast, Medicaid’s coverage of nursing facility services includes skilled nursing, rehabilitation needed due to injury, disability or illness, and long-term care. Under federal law, state Medicaid programs must cover nursing facility services for qualified individuals age 21 or over. All states have chosen to also offer the optional benefit of nursing facility services for individuals under 21 years of age. Medicare beneficiaries may continue to need nursing facility care after their Medicare benefit is exhausted. In such instances beneficiaries may pay privately or use any long-term care insurance they may have. In certain circumstances, the beneficiaries may also be eligible for Medicaid if, for example, they spend enough of their resources to meet Medicaid eligibility rules in their state. If the beneficiary does become dually eligible, the beneficiary may then qualify for Medicaid coverage of nursing facility services, beyond what Medicare covers. Overlapping coverage of nursing facility care in Medicare and Medicaid provides nursing facilities with a financial incentive to transfer dual-eligible beneficiaries back to hospitals when nursing facility care is being paid for by Medicaid. 
By transferring dual-eligible beneficiaries from a nursing facility to a hospital, the nursing facility will qualify for what is generally a higher payment under Medicare when beneficiaries are readmitted and require skilled nursing services. One study of hospitalizations among dually eligible nursing facility residents in 2005 found that approximately 45 percent of hospitalizations among beneficiaries receiving Medicare SNF services or Medicaid nursing facility services were potentially avoidable. Home Health Care. Medicare and Medicaid both set requirements for how a beneficiary can qualify for home health services, and state Medicaid programs further refine these requirements for Medicaid coverage. Medicare’s home health benefit covers skilled nursing services, physical therapy, speech-language pathology, occupational therapy, medical social services, and medical equipment. State Medicaid programs are required to cover home health services for certain categories of beneficiaries, including those who are entitled to nursing facility services under the state plan. Under Medicaid’s home health benefit, states must cover nursing services, home health aide services, and medical supplies and equipment for use in the home. States may also choose to cover physical, occupational, or speech therapy under this benefit. The Medicare Payment Advisory Commission reported that some states have tried to increase the proportion of home health services for dual-eligible beneficiaries covered by Medicare, rather than Medicaid. For instance, some states have required home health agencies to show proof of a Medicare denial for home health services for a dual-eligible beneficiary before covering the service under Medicaid. Medicare may also cover home health aide services on a part-time or intermittent basis if they are needed as support services for skilled nursing services. Such aide services may include assistance with activities of daily living (ADL), instrumental activities of daily living (IADL), supervision or guidance with ADLs, or a mix of those. 
Beneficiaries’ ability to contest a determination that their benefits will be denied, reduced, or terminated is a basic right provided for both Medicare and Medicaid beneficiaries. The appeals process that a beneficiary must follow depends on whether the benefit being contested is a Medicare or Medicaid benefit. Both Medicare and Medicaid have standard appeals processes and expedited appeals processes in cases of urgent need. In this report, we only describe the standard Medicare and Medicaid appeals processes. As of January 2012, 84 Program of All-Inclusive Care for the Elderly (PACE) sites in 29 states enrolled about 21,000 beneficiaries. There are key differences in enrollment choice requirements across the Medicare and Medicaid programs, the FFS and managed care payment systems, and the selected states. Certain consumer protection requirements are unique to managed care plans in areas such as continuity and coordination of care and provider networks. Other consumer protection requirements also differ across the programs, payment systems, and selected states. The MMA authorized a type of MA plan referred to as a special needs plan (SNP) to address the unique needs of certain categories of Medicare beneficiaries, including dual-eligible beneficiaries. Pub. L. No. 108-173, § 231, 117 Stat. 2066, 2207 (2003) (codified, as amended, at 42 U.S.C. § 1395w-21(a)(2)(A)(ii)). SNPs, including those that serve dual-eligible beneficiaries (D-SNPs), have been reauthorized several times since their establishment was first authorized in 2003. Within Medicare, enrollment in managed care is always voluntary, whereas state Medicaid programs can require enrollment in managed care in certain situations. In Medicare, beneficiaries—including dual-eligible beneficiaries—are enrolled in FFS unless they select an MA plan. 
In general, beneficiaries may select an MA plan voluntarily when they first become eligible for Medicare, during an annual coordinated election period, or during special election periods, such as when an MA plan’s contract is terminated or discontinued in the area where a beneficiary lives or when CMS determines that beneficiaries meet exceptional conditions. CMS has created a special election period for dual-eligible beneficiaries, and accordingly, they may opt into MA or FFS or change MA plans at any time. They generally may select any MA plan, including D-SNPs, that serves the area where they live, though the number of plans available varies by area. MA plans may limit the providers from whom Medicare beneficiaries, including dual-eligible beneficiaries, may receive covered services, whereas beneficiaries in Medicare FFS may receive covered services from any provider enrolled in Medicare. In contrast, a Medicaid beneficiary’s ability to choose to remain in FFS or enroll in managed care depends on how the state structures its Medicaid program. As an alternative to FFS, states can structure their Medicaid programs to require enrollment in managed care, or allow beneficiaries to choose between the two payment systems. Unlike in Medicare, states can mandatorily enroll beneficiaries, including dual-eligible beneficiaries, into Medicaid managed care with CMS approval of a section 1115 demonstration waiver or section 1915(b) waiver. States mandating enrollment into a managed care plan generally must provide beneficiaries a choice of at least two plans, except in specific circumstances, such as in rural areas. Otherwise, similar to Medicare, the number of available Medicaid managed care plans varies, depending on how many plans are offered where the beneficiary lives. 
Subject to the terms and conditions of the waiver, Medicaid managed care plans can generally limit beneficiaries, including dual-eligible beneficiaries, to the plan’s provider network, whereas beneficiaries in Medicaid FFS may receive covered services from any qualified Medicaid provider. CMS officials informed us, however, that for dual-eligible beneficiaries, the agency does not have the authority to allow states to limit the beneficiary’s choice of provider for Medicare-covered benefits when mandatorily enrolling them into Medicaid managed care plans. State requirements vary with respect to Medicaid enrollment into FFS or managed care and for choice between plans if beneficiaries enroll in managed care. For example: Arizona: The state requires Medicaid beneficiaries, including all dual-eligible beneficiaries, to enroll in either the Medicaid acute or long-term managed care programs under a section 1115 demonstration waiver. Beneficiaries in the state’s acute care program have a choice among managed care plans. Beneficiaries enrolled in the long-term care program generally have a choice of plans if they live in or are moving to Pima or Maricopa counties, which are the state’s two most populated counties and the only counties where more than one long-term care plan operates. California: Medicaid beneficiaries’ choice of payment system varies depending on where they live among California’s 58 counties. In 28 mostly rural counties, all dual-eligible beneficiaries are in FFS. In the remaining 30 counties, the state has three different Medicaid programs for enrolling beneficiaries in managed care. Dual-eligible beneficiaries in 14 California counties are mandatorily enrolled in managed care through a county-organized health system, which is a health plan operated by a county that contracts with the state to provide health care benefits to Medicaid beneficiaries. 
Because there is only one plan in each of these counties, beneficiaries enrolled in the county-organized health systems have no choice between plans. Dual-eligible beneficiaries in 14 counties may choose between FFS or the state’s Two-Plan managed care program. Under the Two-Plan program, beneficiaries who enroll in managed care have a choice between the Local Initiative Health Plan, a public agency that is independent of the county, and a commercial plan. In the remaining two counties, Sacramento and San Diego, dual-eligible beneficiaries can choose between FFS or the Geographic Managed Care program. Under the Geographic Managed Care program, dual-eligible beneficiaries who enroll in managed care can choose from several commercial managed care plans.

Minnesota: Dually eligible seniors in Minnesota must enroll in one of two managed care programs, and dual-eligible beneficiaries who became eligible on the basis of their disabilities can choose whether to enroll in a managed care program. Minnesota has a 1915(b)/(c) waiver to mandatorily enroll dually eligible seniors in a Medicaid managed care plan. Alternatively, these seniors can choose to enroll in a participating D-SNP that, under contract with the state, integrates Medicare and Medicaid financing and services. Dual-eligible beneficiaries age 18 to 64 who have disabilities are enrolled in managed care unless they opt into FFS, which they may do at any time. Whether dual-eligible beneficiaries have a choice between plans varies depending on the county where they live.

North Carolina: According to North Carolina Medicaid officials, all Medicaid beneficiaries, including dual-eligible beneficiaries, are in FFS, and the majority of dual-eligible beneficiaries are in a primary care case management program, where primary care providers are paid on a FFS basis, in addition to receiving a monthly fee to perform certain care coordination activities.
In general, federal law and regulations do not specifically require MA plans or Medicaid managed care plans to cover services provided by a beneficiary’s previous provider if that provider is not in the plan’s network when a beneficiary first enrolls in a plan or switches plans. There are limited circumstances when managed care plans are required to cover such services during a transition period. MA organizations must ensure that covered services are available and accessible to beneficiaries. In implementing this requirement, CMS officials informed us that MA organizations must ensure that there is no gap in coverage or problems with access to medically necessary services when a beneficiary must change to a plan-contracted provider. For example, a beneficiary receiving oxygen may need to switch to a new oxygen supplier when the beneficiary joins the MA plan or switches plans. As the beneficiary transitions to the new oxygen supplier, the MA plan may need to reimburse the beneficiary’s previous provider to ensure that there is no gap in coverage, and that the beneficiary maintains access to medically necessary services. MA organizations also must ensure coordination of services through various arrangements with network providers, such as programs that coordinate plan services with community and social services in the area (for example, services offered by an area agency on aging). Additionally, D-SNPs or any other type of SNP must provide dual-eligible beneficiaries with access to appropriate staff to coordinate or deliver all services and benefits, and coordinate communication among plan personnel, providers, and the dual-eligible beneficiaries themselves. As with Medicare, Medicaid managed care plans are generally not required to cover services by a beneficiary’s previous provider if that provider is not in the plan’s network.
However, states determine to what extent Medicaid managed care plans must provide beneficiaries with access to a person or entity primarily responsible for coordinating health services on the basis of the services the plan must cover. Individual states may have continuity of care requirements for their Medicaid managed care programs, as defined under an applicable waiver or state requirements. For example, in California, beneficiaries newly enrolled in managed care plans may request and receive coverage of the completion of treatments initiated by an out-of-network provider with whom they have an ongoing relationship in certain circumstances, such as for the treatment of a terminal illness or acute condition. The length of the coverage depends on the stability of the beneficiary and the nature of the medical condition. Minnesota also has continuity of care requirements. For newly enrolled dually eligible seniors, managed care plans must cover medically necessary services that an out-of-network provider, a different plan, or the state agency authorized before the dual-eligible beneficiary enrolled with the managed care plan. However, the plan is allowed to require that the dual-eligible beneficiary receive the services from an in-network provider if that would not create an undue hardship on the dual-eligible beneficiary and the services are clinically appropriate. Arizona requires that managed care plans employ transition coordinators to ensure continuity of care, and beneficiaries in the state’s long-term care program receive additional case management for help navigating their care options, including planning, coordinating and facilitating access to services. Medicare and state Medicaid programs require managed care plans to meet certain provider network standards. 
In order to limit beneficiaries to a network of providers, MA organizations must meet a number of requirements, including maintaining and monitoring a network of appropriate providers, under contract, that is sufficient to provide adequate access to covered services to meet the needs of enrolled beneficiaries. Federal guidelines establish minimum network adequacy requirements that vary depending on a county’s geographic designation, such as whether the county is urban or rural. MA organizations must contract with sufficient numbers of certain types of provider specialists per 1,000 Medicare beneficiaries in a county. For example, MA plans operating in rural counties must have at least one full-time equivalent (FTE) primary care provider per 1,000 beneficiaries. Additionally, MA organizations must demonstrate that their network meets geographic requirements related to the time and distance it takes beneficiaries to travel to providers. For example, in rural counties, MA organizations must also ensure that 90 percent of beneficiaries can access primary care providers within 40 minutes and 30 miles of travel. MA organizations must also ensure that the networks include a minimum number of specialists and specialty facilities, such as at least one cardiologist and one skilled nursing facility per 1,000 beneficiaries. States must ensure, through contracts, that Medicaid managed care plans demonstrate that they have the capacity to serve expected enrollment in the service area in accordance with state standards. For example, plans must submit documentation to the state that they offer an appropriate range of preventive, primary care, and specialty services, and maintain a network of providers that is sufficient in number, mix, and geographic distribution to meet the needs of the enrollees. 
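Viewed mechanically, the rural-county MA standards above are two quantitative tests: a provider-to-beneficiary ratio and a time-and-distance access percentage. The sketch below is illustrative only; the function name and default thresholds are modeled on the examples in the text, while CMS's actual criteria vary by county type and provider specialty.

```python
# Illustrative check of the MA-style network adequacy tests described in
# the text. Defaults reflect the rural-county examples: at least 1 FTE
# primary care provider per 1,000 beneficiaries, and at least 90 percent
# of beneficiaries within the travel standard (e.g., 40 minutes/30 miles).
# This is a hypothetical sketch, not CMS's actual compliance test.

def meets_network_adequacy(beneficiaries, fte_pcps, pct_within_travel_standard,
                           min_pcps_per_1000=1.0, min_pct_access=90.0):
    """Return True if both sample tests pass."""
    pcps_per_1000 = fte_pcps / (beneficiaries / 1000)
    return (pcps_per_1000 >= min_pcps_per_1000
            and pct_within_travel_standard >= min_pct_access)

# 5 FTE PCPs for 4,000 beneficiaries is 1.25 per 1,000, and 92 percent of
# beneficiaries meet the travel standard, so the network passes:
print(meets_network_adequacy(4000, 5, 92))  # True
# Dropping to 3 FTE PCPs (0.75 per 1,000) fails the ratio test:
print(meets_network_adequacy(4000, 3, 92))  # False
```

A real compliance check would apply a separate table of minimums per specialty and facility type, as the text notes for cardiologists and skilled nursing facilities.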
Unlike Medicare, however, federal Medicaid laws and regulations do not establish minimum provider network requirements and instead generally require states to set the standards for access to care. Accordingly, subject to the terms and conditions of a waiver, if applicable, states may establish requirements that define the minimum number and types of providers in a network, and time and distance requirements between beneficiaries and primary care providers, as well as other network adequacy requirements. For example, Medicaid managed care plans in California must maintain a provider to beneficiary ratio of one FTE primary care physician for every 2,000 beneficiaries and one FTE physician from any specialty for every 1,200 beneficiaries. In some counties, managed care plans must also ensure that primary care physicians are located within 30 minutes or 10 miles of beneficiaries’ residences, unless the state approves an alternative time and distance standard. In addition to time and distance standards, Arizona requires managed care plans to contract with a specific number of providers, as determined by the state, which varies by each area that the plan serves. Arizona also defines time frames for beneficiaries to be able to access some services. For example, Arizona Medicaid managed care plans must provide beneficiaries with access to emergency primary care services within 24 hours, urgent primary care services within 2 days, and routine primary care services within 21 days. Plans must include a minimum number of other types of providers—such as dentists, pharmacists, home- and community-based services providers, and behavioral health facilities—in their networks as well. Medicare and Medicaid each have requirements regarding the marketing materials managed care organizations send out to beneficiaries. MA organizations are required to comply with a variety of federal requirements for marketing materials that are intended to promote enrollment in a specific health plan. 
For example, organizations generally must submit marketing materials to CMS for review prior to sending to beneficiaries. Materials must provide an adequate written description of the plan’s benefits and services and comply with formatting requirements, such as a minimum font size. In addition, MA organizations must translate materials into any non-English language that is the primary language of at least 5 percent of individuals in the plan’s service area. Medicaid managed care plans are required to comply with both federal and state requirements for marketing materials. For example, Medicaid managed care plans must obtain state approval before distributing marketing materials. Federal requirements also mandate that materials must be written in an easily understood language and format, though requirements for format are not precisely defined. In addition, plans must make information, including Medicaid marketing materials, available in each prevalent language spoken by enrollees and potential enrollees in the plan’s service area. Subject to the terms and conditions of a waiver, if applicable, states may further define requirements for readability and material translation, while other states may prohibit marketing altogether. For example, marketing materials in California must be translated when a threshold number of beneficiaries whose primary language is not English live in a managed care plan’s service area or in the same or adjacent zip codes, among other circumstances. Additionally, all Medicaid marketing materials in California must be written at no higher than the sixth-grade reading level and be approved by the state Medicaid agency. Arizona prohibits Medicaid managed care plans from conducting any marketing that is solely intended to promote enrollment; all marketing materials must include a health message. 
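The federal MA translation rule above is a simple threshold test over language shares in a plan's service area. A minimal sketch follows; the function name and the sample data are hypothetical, and state Medicaid thresholds (such as California's) would use different trigger conditions.

```python
# Illustrative version of the MA translation rule described in the text:
# materials must be translated into any non-English language that is the
# primary language of at least 5 percent of individuals in the plan's
# service area. Function name and population shares are hypothetical.

def required_translations(language_shares, threshold=0.05):
    """Return the non-English languages whose share of the service-area
    population meets or exceeds the threshold."""
    return sorted(lang for lang, share in language_shares.items()
                  if lang != "English" and share >= threshold)

service_area = {"English": 0.78, "Spanish": 0.15,
                "Vietnamese": 0.05, "Tagalog": 0.02}
# Spanish and Vietnamese meet the 5 percent floor; Tagalog does not:
print(required_translations(service_area))  # ['Spanish', 'Vietnamese']
```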
Other requirements affecting dual-eligible beneficiaries, such as coverage for nursing facility and home health services and the appeals process, vary between Medicare and Medicaid, and between the FFS and managed care payment systems. Beneficiaries must meet different requirements to qualify for nursing facility care under Medicare and Medicaid. As required under federal law, to qualify for Medicare’s 100 days of SNF coverage, beneficiaries must have a prior hospital stay. Specifically, Medicare beneficiaries must have been hospitalized for medically necessary inpatient hospital care for at least 3 consecutive calendar days, not including the discharge date. In addition, Medicare beneficiaries must meet certain criteria, such as: (1) require skilled nursing or rehabilitative services on a daily basis, (2) require those services for a condition the beneficiary had during hospitalization, and (3) require daily skilled services that can only be provided in an SNF. Unlike Medicare, Medicaid does not limit coverage of nursing facility services to beneficiaries with prior hospital stays. Instead, federal Medicaid law requires states to provide coverage of nursing facility services for adult Medicaid beneficiaries when medically necessary, and states must cover services provided by qualified SNFs as well as other types of nursing facilities. Within broad federal parameters, such as requiring that beneficiaries need daily, inpatient nursing facility services that are ordered by a physician, states may impose additional requirements when defining coverage for this benefit. For example, beneficiaries in North Carolina must show they meet the requirements to be in a nursing facility by demonstrating certain qualifying conditions.
Qualifying conditions may include, among other things, (1) the need for services that require a registered nurse a minimum of 8 hours a day and other personnel working under the supervision of a licensed nurse, (2) the need for restorative nursing to maintain or restore maximum function or prevent deterioration in individuals with progressive disabilities as much as possible, or (3) the need for a specialized therapeutic diet. In Arizona, the acute care program covers nursing facility services for a limited amount of time (90 days) if hospitalization will otherwise occur or the treatment cannot be administered safely in a less restrictive setting, such as at home. Medicaid beneficiaries in the long-term care program in Arizona have longer-term nursing facility benefits. Beneficiaries qualify for the long-term care program when they have a functional or medical condition that impairs functioning to the extent that the individual would be deemed at immediate risk of institutionalization. Impairments may include, among other things, requiring nursing care, daily nurse supervision, or regular medical monitoring, or having impairments in cognitive functioning or in self-care activities of daily living (ADL). Beneficiaries must meet different requirements to qualify for home health services under Medicare and Medicaid. Medicare beneficiaries may only qualify for home health coverage when they are confined to a home or an institution that is not a hospital, SNF, or nursing facility. Additionally, the beneficiary must be under the care of a physician; need intermittent skilled nursing care, physical therapy, or speech-language pathology services, or have a continuing need for occupational therapy services; and receive services under a written plan of care. Unlike in Medicare, states may not require that Medicaid beneficiaries be confined to a home or institution in order to qualify for home health benefits.
Instead, federal regulations require that, in order to qualify for Medicaid coverage, home health services generally must be provided at the beneficiary’s home and ordered by a physician as part of a written plan of care, which must be renewed every 60 days. As with nursing facility services, state Medicaid programs have the authority to impose additional conditions on accessing home health benefits, and accordingly state programs vary with respect to when beneficiaries may qualify for home health benefits. For example, to receive home health coverage in North Carolina, a physician must order the home health services and must have face-to-face contact with the beneficiary 90 days prior to care or 30 days after care, and the services must be medically necessary. Beneficiaries must have at least one reason, from a specific list of reasons set by the state, to receive home health services. For example, beneficiaries might qualify if they require assistance leaving the home because of a physical impairment or medical condition, or if they are medically fragile.

Medicare and Medicaid each have multiple levels of appeals, which vary further between each program’s managed care and FFS delivery systems. Accordingly, the appeals processes that dual-eligible beneficiaries encounter differ depending on whether the benefit being denied, reduced, or terminated is a Medicare or Medicaid benefit, and whether the individual is enrolled in FFS or managed care. Both programs require that beneficiaries in either managed care or FFS be notified of their right to appeal. Medicare has five levels of appeals for managed care and FFS.

1. Beneficiaries enrolled in an MA plan must first request review by the MA organization. In FFS, beneficiaries first request review by the claims processing contractor that made the initial coverage decision.

2. For MA, if the adverse determination is affirmed, the issues must be automatically reviewed and resolved by an independent review entity; for Medicare FFS, beneficiaries may request review by a qualified independent contractor. For beneficiaries in either FFS or managed care, this is the earliest opportunity for their claim to be reviewed by a different entity than the one that made the original determination.

3. If the independent entity affirms the adverse determination, MA and FFS beneficiaries have the right to request a hearing before an administrative law judge (ALJ) in the Department of Health and Human Services if the amount remaining in controversy (the projected value of denied services or a calculated amount based on charges for services provided) is above a specified level.

4. MA and FFS beneficiaries who are dissatisfied with the ALJ hearing decision may request review by the Medicare Appeals Council (42 C.F.R. §§ 405.1100, 422.608). The Medicare Appeals Council undertakes a de novo review and may issue a final decision, dismiss the appeal, or remand the case to the ALJ with instructions for rehearing the case. Medicare FFS beneficiaries may also request this review if the ALJ dismissed their case or failed to issue a timely decision.

5. MA and FFS beneficiaries may request judicial review by a U.S. district court of a decision by the Medicare Appeals Council if the amount in controversy is above a specified level.

There are no federal Medicare requirements that benefits continue during the appeals processes for either managed care or FFS, nor do federal law and regulations require that FFS or MA beneficiaries receive personal assistance, including assistance from a care coordinator or other specialist, when navigating the appeals process. However, there are certain protections incorporated into the appeals process that are designed to assist Medicare beneficiaries.
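The five Medicare appeal levels above form a linear escalation in which the ALJ hearing and judicial review are gated by amount-in-controversy thresholds. The sketch below models that structure; the dollar figures are placeholders, not the actual regulatory amounts, which are set and updated by regulation.

```python
# Sketch of Medicare's five appeal levels as a linear escalation. Levels
# 3 (ALJ hearing) and 5 (judicial review) carry amount-in-controversy
# gates; the dollar thresholds below are placeholders, not the actual
# regulatory amounts.

LEVELS = [
    "review by plan or claims contractor",    # level 1
    "independent review entity/contractor",   # level 2
    "ALJ hearing",                            # level 3 (dollar gate)
    "Medicare Appeals Council",               # level 4
    "judicial review in U.S. district court", # level 5 (dollar gate)
]

ALJ_THRESHOLD = 100     # placeholder dollar amount
COURT_THRESHOLD = 1000  # placeholder dollar amount

def available_levels(amount_in_controversy):
    """Return the appeal levels potentially reachable for a given
    projected value of the denied services."""
    reachable = LEVELS[:2]  # the first two levels have no dollar gate
    if amount_in_controversy >= ALJ_THRESHOLD:
        reachable += LEVELS[2:4]
        if amount_in_controversy >= COURT_THRESHOLD:
            reachable.append(LEVELS[4])
    return reachable

print(len(available_levels(50)))    # 2: stops after the independent review
print(len(available_levels(5000)))  # 5: all levels potentially available
```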
For example, Medicare beneficiaries may appoint a representative to assist them with an appeal. Beneficiaries also may seek assistance through the Office of the Medicare Beneficiary Ombudsman, which is responsible for resolving inquiries and complaints for all aspects of the Medicare program, through the 1-800-MEDICARE help line. States can structure their Medicaid appeals processes within the parameters of federal requirements. Medicaid FFS beneficiaries must have access to a fair hearing before a state agency for certain actions, including when benefits are terminated, suspended, or reduced. Once a final agency decision is made, Medicaid FFS beneficiaries may request a judicial review of the decision if permitted under state law. Beneficiaries in Medicaid managed care plans must have the ability to appeal a termination, suspension, or reduction of a benefit to the plan as well as have access to a state fair hearing. States determine whether beneficiaries must first exhaust their appeal to their Medicaid managed care plans before they may request a state fair hearing. During these appeals, benefits generally must continue in certain circumstances; for example, benefits must continue until a final agency decision is made if the beneficiary is mailed a notice of action and files an appeal before the date of the action. As in Medicare, neither federal regulations nor law require that beneficiaries in Medicaid FFS have access to personal assistance in navigating the appeals process. States, however, have the option of providing this assistance to FFS beneficiaries. For beneficiaries in Medicaid managed care, plans must give beneficiaries assistance with completing appeal forms and taking other procedural steps, including providing interpreter services and toll-free numbers for assistance. The appeals processes in the states that we reviewed varied, for instance, as to whether a beneficiary in managed care has to appeal to his or her managed care plan first.
For example, Arizona requires beneficiaries to first appeal to their managed care plan before requesting a state fair hearing. In contrast, Minnesota allows beneficiaries to request a state fair hearing without first appealing to their managed care plan. Dual-eligible beneficiaries in Minnesota may also request help from the state ombudsman, and county boards are required to designate a coordinator to assist the state Medicaid agency, including coordinating appeals with the ombudsman. See appendix II for a more detailed summary of these consumer protection requirements across programs, payment systems, and selected states. CMS and states used compliance and enforcement actions that ranged from informal written notices to contract terminations in order to help ensure MA organizations and Medicaid managed care plans complied with consumer protection requirements. CMS used both compliance and enforcement actions to bring noncompliant MA organizations into compliance with federal requirements. Compliance actions are intended to prompt managed care organizations to address issues of noncompliance, such as the timing of disenrollments, whereas enforcement actions impose a penalty on a managed care organization and are taken to address more significant violations. According to CMS, the nature of each violation is considered when determining the appropriate compliance or enforcement action and the actions generally proceed through the process in a step-by-step manner before enforcement actions are taken. CMS takes compliance actions against MA organizations to address violations that are identified during the agency’s monitoring and auditing activities. According to agency guidance, compliance actions are appropriate when the MA organization: (1) demonstrates sustained poor performance over a period of time; (2) has a noncompliance issue that involves a large number of beneficiaries; or (3) does not meet its contractual requirements. 
The lowest-level compliance action is a notice of noncompliance, which may be an e-mail from a CMS contract manager to a managed care plan stating that an aspect of the program is out of compliance. The notice of noncompliance requests that the plan respond with how it will address the problem and may be followed by a warning letter from CMS that identifies a limited and quickly fixable issue of noncompliance that requires immediate remedy. If CMS determines that the noncompliance affects multiple beneficiaries and represents an ongoing or systemic inability by the plan to adhere to Medicare requirements, CMS will send a formal letter to the MA organization’s chief executive officer stating the concern and requiring the organization to develop and implement a corrective action plan (CAP). The CAP must address the deficiencies identified by CMS, provide an attainable time frame for implementing corrective actions, and devise a process for the managed care organization to validate and monitor that the corrective actions were taken and remain effective. Between January 1, 2010, and June 30, 2012, CMS took 546 compliance actions generally related to consumer protection requirements of importance to dual-eligible beneficiaries. (See table 1.) These issues of noncompliance that could potentially affect dual-eligible beneficiaries were identified during CMS’s ongoing oversight activities, analysis of plan deliverables, and complaints made by beneficiaries or providers. Of these 546 actions, 386, or 70 percent, were due to marketing issues. For example, CMS sent notices of noncompliance or warning letters for marketing issues related to misrepresentation of requirements for enrollment and use of unapproved marketing materials. The three states we reviewed used similar sequences of actions to identify and address issues of noncompliance by their Medicaid managed care plans.
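The 70 percent marketing share cited above is simple division; the unrounded figure is slightly higher than the rounded number reported in the text.

```python
# Arithmetic behind the CMS compliance-action figures cited in the text.
total_actions = 546      # CMS compliance actions, Jan. 1, 2010-June 30, 2012
marketing_actions = 386  # the subset attributable to marketing issues

share = marketing_actions / total_actions * 100
print(f"{share:.1f} percent")  # 70.7 percent, reported in the text as 70
```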
State officials reported that when noncompliance issues are suspected, they first notify the plans and give them an opportunity to remedy the problem. Subsequent deficiencies may require a Medicaid managed care plan to initiate a corrective action plan that the state would monitor to ensure the appropriate changes are made. Between January 1, 2010, and June 30, 2012, the three states reported they took a total of 157 compliance actions against their Medicaid managed care plans. These actions ranged from sending warning letters and issuing notices to cure to requiring CAPs and imposing financial penalties. The most common action taken by the states was to require a managed care plan to implement a CAP. The reasons that states required Medicaid managed care plans to institute CAPs during the reporting period varied. California and Minnesota identified noncompliance with the appeals and grievance process that required corrective actions, including corrective actions to ensure beneficiaries were able to access appropriate translation services. The majority of the CAPs required by Minnesota’s Medicaid office dealt with plan management of beneficiary appeals and grievances. Arizona required CAPs to address the use of unapproved marketing materials. After appeals, the next most frequent reason states requested CAPs on consumer protection requirements was to address problems regarding beneficiaries’ access to providers, services, or drugs. Figure 2 illustrates the reasons why Medicaid managed care plans were required to implement a CAP for the 91 CAPs issued during the period. We received written comments on a draft of this report from the Department of Health and Human Services, which are reprinted in appendix III, and technical comments, which we incorporated as appropriate.
The department noted that the report was an accurate assessment of the programs we reviewed, and added that the Medicare- Medicaid Coordination Office has already made some progress aligning the requirements between the two programs in the area of appeals. CMS has developed a revised Notice of Medicare Denial of Coverage (or Payment) that includes optional language to be used in cases where a Medicare health plan enrollee also receives full Medicaid benefits that are being managed by the Medicare health plan. The revised Notice of Medicare Denial of Coverage (or Payment) is under review as part of the approval process. We will send copies of this report to the Administrator of CMS and interested congressional committees. We will also make copies available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or KingK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In Arizona’s Medicaid program, called the Arizona Health Care Cost Containment System, nearly all Medicaid beneficiaries, including dual- eligible beneficiaries, are enrolled in the acute care managed care program for Medicaid benefits. Individuals requiring long-term supports and services are enrolled in a separate long-term care managed care program. Both managed care programs operate under a section 1115 demonstration waiver. As of January 2012, Arizona had about 110,000 dual-eligible beneficiaries enrolled in Medicaid managed care, and over 1.3 million total Medicaid beneficiaries. California’s Medicaid system, called Medi-Cal, includes 28 counties with only a fee-for-service (FFS) system and 30 counties with one of three different managed care programs. 
Of the managed care options, the first is a county-operated health system, which requires nearly all Medicaid beneficiaries in participating counties, including dual-eligible beneficiaries, to enroll in a plan operated by the county. The second is the Two-Plan model, which has a commercial plan and a Local Initiative Health Plan, a public agency that is independent of the county. In the third program, called Geographic Managed Care, several commercial plans are offered as choices for beneficiaries. In both the Two-Plan and Geographic Managed Care programs, most Medicaid beneficiaries in the county are mandatorily enrolled in a managed care plan, but dual-eligible beneficiaries are in FFS unless they enroll voluntarily into one of the health plans. California officials reported that, as of June 2012, 26 percent of California’s approximately 1 million dual-eligible beneficiaries are enrolled in managed care, while the remaining 74 percent of dual-eligible beneficiaries are in FFS. In Minnesota, dual-eligible beneficiaries who are 65 years old and older are required to enroll in a managed care program called Minnesota Senior Care Plus (MSC+). As of June 2012, about 10,500 dually eligible beneficiaries 65 and older in Minnesota are enrolled in MSC+. Alternatively, dual-eligible beneficiaries 65 and older may choose to enroll in the Minnesota Senior Health Options (MSHO) program. Unlike MSC+ plans, MSHO plans are Medicare special needs plans that also have contracts with the state for the Medicaid benefits package, which enables the plans to integrate Medicare and Medicaid financing and services for dual-eligible beneficiaries. About 35,700 dually eligible beneficiaries 65 and older in Minnesota are enrolled in an MSHO plan. Dual-eligible beneficiaries age 18 to 64 who have a disability are enrolled in the state’s Special Needs Basic Care managed care program if they do not opt into Medicaid FFS.
As of July 2012, about 39,000 of the state’s disabled population (both dual-eligible beneficiaries and non-dual-eligible beneficiaries) are enrolled in Special Needs Basic Care. More than 21,000 of these disabled beneficiaries were dual-eligible beneficiaries. According to Minnesota Medicaid officials, as of June 2012, almost 14 percent, or about 114,500, of Minnesota’s Medicaid population are dual-eligible beneficiaries, and 59 percent of the state’s dual-eligible beneficiaries are enrolled in managed care. According to North Carolina Medicaid officials, North Carolina primarily operates its Medicaid program through a primary care case management (PCCM) program, called Carolina Access. Under the PCCM program, primary care providers are paid on a FFS basis, in addition to receiving a monthly fee for certain care coordination activities. The state’s enhanced PCCM program, called Community Care of North Carolina, includes 14 networks of primary care providers that are responsible for an enhanced set of care coordination activities. According to North Carolina Medicaid officials, dual-eligible beneficiaries are assigned a primary care provider in one of the 14 networks, but they may opt out of the program if they choose a healthcare provider outside of the state’s Medicaid program. As of June 2012, according to state officials, about 13 percent of the state’s Medicaid population was dually eligible for Medicare and Medicaid and almost 68 percent of dual-eligible beneficiaries in the state were enrolled in the state’s PCCM program. Table 3 describes selected consumer protection requirements for Medicare and Medicaid fee-for-service (FFS), and table 4 describes selected consumer protection requirements for Medicare Advantage (MA) and Medicaid managed care. 
In addition to the contact named above, Randy DiRosa (Assistant Director), Lori Achman, Anne Hopewell, Lisa Motley, Laurie Pachter, Pauline Seretakis, Lillian Shields, and Hemi Tewarson made key contributions to this report.
Dual-eligible beneficiaries are low-income seniors and individuals with disabilities enrolled in Medicare and Medicaid. In 2010, there were about 9.9 million dual-eligible beneficiaries. Both programs have requirements to protect the rights of beneficiaries. These requirements are particularly important to dual-eligible beneficiaries, who must navigate the rules of both programs and generally have poorer health status. To help inform efforts to better integrate the financing and care for dual-eligible beneficiaries, GAO (1) compared selected consumer protection requirements within Medicare FFS and Medicare Advantage, and Medicaid FFS and managed care, and (2) described related compliance and enforcement actions taken by CMS and selected states against managed care plans. GAO identified consumer protections of particular importance to dual-eligible beneficiaries on the basis of expert interviews and literature, including protections related to enrollment, provider networks, and appeals. GAO reviewed relevant federal and state statutes, regulations, and policy statements, and interviewed officials from CMS and four states selected on the basis of their share of dual-eligible beneficiaries and use of managed care (Arizona, California, Minnesota, and North Carolina). GAO analyzed data on compliance and enforcement actions in Medicare Advantage and Medicaid managed care from January 1, 2010, through June 30, 2012. Medicare and Medicaid consumer protection requirements vary across programs, payment systems--either fee-for-service (FFS) or managed care--and states. Within Medicare, enrollment in managed care through the Medicare Advantage (MA) program must always be voluntary, whereas state Medicaid programs can require enrollment in managed care in certain situations. For example, Arizona requires nearly all beneficiaries, including dual-eligible beneficiaries, to enroll in managed care, but in North Carolina all beneficiaries are in FFS. 
In addition, Medicare and state Medicaid programs require managed care plans to meet certain provider network requirements to ensure beneficiaries have adequate access to covered services. For example, MA plans in rural counties must have at least one primary care provider per 1,000 beneficiaries. Subject to federal parameters, states establish network requirements for their Medicaid programs. For example, in California every plan must have at least one primary care provider per 2,000 beneficiaries. Finally, Medicare and Medicaid also have different appeals processes that do not align with each other. The Medicare appeals process has up to five levels of review for decisions to deny, reduce, or terminate services, with certain differences between FFS and MA. In Medicaid, states can structure appeals processes within federal parameters. States must establish a Medicaid appeals process that provides access to a state fair hearing and Medicaid managed care plans must provide beneficiaries with the right to appeal to the plan, though states can determine the sequence of these appeals. For example, Arizona requires beneficiaries to appeal to the managed care plan first, while a beneficiary in Minnesota may go directly to a state fair hearing without an initial appeal to the managed care plan. Both the Centers for Medicare & Medicaid Services (CMS), the agency that administers the Medicare program and oversees states' operation of Medicaid programs, and states took a range of compliance and enforcement actions to help ensure that MA and Medicaid managed care organizations complied with their consumer protection requirements. Between January 1, 2010, and June 30, 2012, CMS took 546 compliance actions against MA organizations on the issues GAO identified as generally related to consumer protections of particular importance to dual-eligible beneficiaries. Compliance actions included notices, warning letters, and requests for corrective action plans (CAP). 
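The provider-to-beneficiary ratios cited above imply a simple minimum-capacity calculation. The sketch below is purely illustrative and assumes only the two ratios mentioned in the text (1 primary care provider per 1,000 beneficiaries for MA plans in rural counties, 1 per 2,000 for California Medicaid plans); the function name is ours, and actual network adequacy reviews consider many additional factors, such as time and distance standards and specialty mix.

```python
import math

def min_required_pcps(beneficiaries: int, ratio: int) -> int:
    """Minimum number of primary care providers needed to satisfy a
    one-provider-per-`ratio`-beneficiaries network requirement."""
    return math.ceil(beneficiaries / ratio)

# MA rural-county ratio cited above: 1 PCP per 1,000 beneficiaries
print(min_required_pcps(4500, 1000))  # -> 5
# California Medicaid ratio cited above: 1 PCP per 2,000 beneficiaries
print(min_required_pcps(4500, 2000))  # -> 3
```

The ceiling division ensures a plan with, say, 4,500 beneficiaries cannot round down to 4 providers under the one-per-1,000 rule.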
During the same period, CMS took 22 enforcement actions against MA organizations, including the imposition of 17 civil money penalties--nearly all for late or inaccurate marketing materials. For five serious violations, CMS suspended enrollment into the MA plan and suspended the MA plan's ability to market to beneficiaries. Similarly, states used notices, letters, fines, and CAPs to improve Medicaid managed care plan compliance with Medicaid consumer protection requirements. During the same period, Arizona, California, and Minnesota required managed care plans to undertake 91 corrective action plans, 52 percent of which related to problems with plans' appeals and grievances processes. In commenting on a draft of the report, the Department of Health and Human Services noted that the report was an accurate assessment of the programs we reviewed.
Women represent a small but rapidly growing segment of the nation’s veteran population. In 1982, there were about 740,000 women veterans. By 1997, that number had increased by 66 percent to over 1.2 million, or 4.8 percent of the veteran population. Today, women make up nearly 14 percent of the active duty force and, with the exception of the Marine Corps, 20 percent of new recruits. By 2010, women are expected to represent over 10 percent of the total veteran population. Like male veterans, female veterans who served on active duty in the uniformed services for the minimum amount of time specified by law and who were discharged, released, or retired under conditions other than dishonorable are eligible for some VA health care services. Historically, veterans’ eligibility for health care services depended on factors such as the presence and extent of service-connected disabilities, income, and period and conditions of military service. In 1996, the Congress passed the Veterans Health Care Eligibility Reform Act (P.L. 104-262), which simplified the eligibility criteria and made all veterans eligible for comprehensive outpatient care. The act also requires VA to establish an enrollment process to manage demand for health care services within available resources. 
The seven priorities for enrollment are (1) veterans with service-connected disabilities rated at 50 percent or higher; (2) veterans with service-connected disabilities rated at 30 or 40 percent; (3) former prisoners of war, veterans with service-connected disabilities rated at 10 or 20 percent, and veterans whose discharge from active military service was for a compensable disability that was incurred or aggravated in the line of duty or veterans who, with certain exceptions and limitations, are receiving disability compensation; (4) catastrophically disabled veterans and veterans receiving increased non-service-connected disability pensions because they are permanently housebound; (5) veterans unable to defray the cost of medical care; (6) all other veterans in the so-called “core” group, including veterans of World War I and veterans with a priority for care based on presumed environmental exposure; and (7) all other veterans. VA may create additional subdivisions within each of these enrollment groups. With the growing women veteran population came the need to provide health care services equivalent to those provided to men. Over the past 15 years, GAO, VA, and the Advisory Committee on Women Veterans have assessed VA services available to women veterans. In 1982, GAO reported that VA lacked adequate general and gender-specific health care services, effective outreach for women veterans, and facilities that provided women veterans appropriate levels of privacy in health care delivery settings. In 1992, GAO reported that VA had made progress in correcting previously identified deficiencies, but some privacy deficiencies and concerns about availability and outreach remained. In response to concerns about the availability of women veterans’ health care and to improve VA’s delivery of health care to women veterans, the Congress enacted the Women Veterans Health Programs Act of 1992 (P.L. 102-585). This act authorized new and expanded health care services for women. 
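The tiered enrollment rules above amount to an ordered, first-match classification. The following is a greatly simplified, hypothetical sketch of that logic (the function name and parameters are ours, and several statutory criteria listed in the text, such as compensable-discharge disabilities and the exceptions for certain compensation recipients, are omitted):

```python
def enrollment_priority(
    sc_rating: int = 0,               # service-connected disability rating, percent
    former_pow: bool = False,
    catastrophically_disabled: bool = False,
    housebound_pension: bool = False,  # increased non-service-connected pension
    low_income: bool = False,          # unable to defray cost of care
    core_group: bool = False,          # e.g., WWI service, presumed environmental exposure
) -> int:
    """Map a veteran's circumstances to one of the seven enrollment
    priority groups described above (1 = highest priority)."""
    if sc_rating >= 50:
        return 1
    if sc_rating >= 30:
        return 2
    if former_pow or sc_rating >= 10:
        return 3
    if catastrophically_disabled or housebound_pension:
        return 4
    if low_income:
        return 5
    if core_group:
        return 6
    return 7

print(enrollment_priority(sc_rating=40))    # -> 2
print(enrollment_priority(former_pow=True)) # -> 3
```

The ordering of the checks matters: because each group is checked from highest priority downward, a veteran meeting multiple criteria is placed in the highest-priority group that applies.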
In 1993, VA’s Office of the Inspector General (OIG) for Health Care Inspections reported that problems—such as women veterans’ not always being informed about eligibility for health care services as well as VA’s lack of appropriate accommodations, medical equipment, and supplies to treat women patients in VA medical facilities—still existed. In December 1993, the Secretary of the Department of Veterans Affairs established VA’s first Women Veterans’ Program Office (WVPO). In November 1994, the Congress enacted legislation (P.L. 103-446) that required VA to create a Center for Women Veterans to oversee VA programs for women. As a result, WVPO was reorganized into the Center for Women Veterans. The Center Director reports directly to the VA Secretary. In compliance with the Government Performance and Results Act, VA has a strategic plan that includes goals for (1) monitoring the trends in women’s utilization of VA services from fiscal years 1998 through 2001, (2) reporting on barriers and actions to address recommendations to correct them, and (3) assessing progress in correcting deficiencies from fiscal years 1999 through 2001. VA’s performance plan also includes goals that target women veterans currently enrolled in VA for aggressive prevention and health promotion activities to screen for breast and cervical cancer. VA has taken several actions to remove barriers identified by GAO, VA, and women veteran proponents over the years that prevent women veterans from obtaining care in VA medical facilities. First, VA has increased outreach efforts to inform women veterans of their eligibility for benefits and health care services. However, it has not evaluated these efforts, so it is not known how knowledgeable women veterans are about their eligibility for health care services. VA has also designated coordinators to assist women veterans in accessing the system. 
In addition, VA has identified and begun to correct patient privacy deficiencies in inpatient and outpatient settings. VA has surveyed its facilities on two occasions to determine the extent to which privacy deficiencies exist. In fiscal year 1998, VA spent more than $67 million correcting deficiencies and has developed plans for correcting remaining deficiencies. However, VA continues to face obstacles addressing the inpatient mental health needs of women veterans in a predominantly male environment and has established a task force to look at this and other issues. Over the last few years, VA has increased its outreach efforts to inform women veterans of their eligibility for care in response to problems highlighted by GAO, VA, and veteran service organizations between 1982 and 1994. We and others reported that (1) women veterans were not aware that they were eligible to receive health care in VA and (2) VA did not target outreach to women veterans, routinely disseminate information to service organizations with predominantly female memberships, or adequately inform women of changes in their eligibility. To address these concerns, VA has targeted women veterans during outreach efforts at the headquarters, regional, and local levels. At the headquarters level, a number of outreach strategies have been implemented. For example, the Center for Women Veterans, as part of its strategic and performance goals for 1998 through 2000, is placing greater emphasis on the importance of outreach to women and the need for improved communication techniques. Since the inception of WVPO and the Center for Women Veterans, VA has held an average of 15 to 20 town meetings a year, along with other informational seminars. The Center also provided informational seminars at the annual conventions of the Women’s Army Corps and the Women Marines; American Legion; American Veterans of World War II, Korea, and Vietnam; and Disabled American Veterans. 
The Center also provided information on VA programs for women veterans and other women veterans’ issues at national training events for county and state veteran service officers and their counterparts in the national Veterans’ Service Organizations. Further, the Center established a web site within the VA home page to provide women veterans with information about health care services and other concerns as well as the opportunity to correspond with the Center via electronic mail. At the regional and local levels, VBA regional and benefit offices, VA medical centers, and Vet Centers display posters, brochures, and other materials that focus specifically on women veterans. They also send representatives to distribute these materials and talk to women veterans during outreach activities, such as health fairs and media events, that are used to publicize the theme that “Women Are Veterans, Too.” The VA facilities we visited were conducting similar activities. For example, the medical center in New Orleans directed its Office of Public Relations to work closely with the women veterans coordinator to develop an outreach program. The New Orleans Vet Center women veterans coordinator told us that she expanded her outreach efforts to colleges with nursing schools in an effort to reach women veterans who do not participate in veteran-related activities. In addition, VBA regional offices coordinate with the Department of Defense to provide information on VA benefits and services to prospective veterans during Transition Assistance Program (TAP) briefings. In addition to providing information to active-duty personnel who plan to separate from the military on how to transition into civilian life, TAP briefings provide information on the benefits they may be eligible for as veterans as well as how to obtain them. Although VA has greatly increased its outreach efforts, it has not yet evaluated the effectiveness of these efforts. 
Women veterans organizations have acknowledged the increase in VA’s outreach efforts directed at women veterans but continue to express concern about whether women veterans are being reached and adequately informed about their eligibility for benefits and health care services. Several women veterans we talked with during our site visits said they found out by chance—during casual conversations—that they were eligible for care. Women veterans and agency staff acknowledged that “word of mouth” from satisfied patients appears to be one of the most effective ways to share information about various benefits and services to which women veterans may be entitled. In March 1998, the Advisory Committee on Women Veterans, the Center for Women Veterans, and the National Center for Veterans Statistics provided specific questions for inclusion in VA’s Survey of Veterans for Year 2000 to address the extent to which women veterans are becoming more knowledgeable about their eligibility for services. This survey should allow VA to assess the effectiveness of its outreach to women veterans. Women veterans coordinators assist in obtaining care, advocate for women veterans’ health care, and collaborate with medical center management to make facilities more sensitive to women veterans. This role was established in 1985 because women veterans did not know how to obtain health care services once they became aware of their eligibility for these services. However, in 1994, VA’s OIG reported that these coordinators often lacked sufficient training and time to perform effectively; many women veterans coordinators performed in this capacity on a part-time basis. VA has since provided women veterans coordinators with training and more time to carry out their roles and help them provide better assistance to women veterans in accessing VA’s health care system and obtaining care. 
In an effort to make them more effective in this role, in 1994, VA implemented a national training program designed to increase women veterans coordinators’ awareness of their roles and familiarize them with women veterans’ issues. The program is administered by a full-time women veterans’ national education coordinator and staff at the Birmingham Regional Medical Education Center. In addition, the women veterans coordinators at VA’s medical centers in Tampa and Bay Pines developed a mini-residency training program for women veterans coordinators. This program, approved in 1995, is the only training program of its kind and is offered for newly appointed women veterans coordinators. To allow women veterans coordinators more time to perform their duties, in 1994, VA established positions for additional full-time women veterans coordinators at selected VA medical centers and four full-time VBA regional women veterans coordinators. As of January 1998, about 40 percent of the women veterans coordinators in VA medical facilities were full-time. According to VA’s Advisory Committee on Women Veterans, the women veterans coordinator program has proven to be one of the most successful initiatives recommended by the committee. Patient privacy for women veterans has been a long-standing concern, and VA acknowledges that the correction of physical barriers that limit women’s access to care in VA facilities will be an ongoing process. Between 1982 and 1994, GAO and VA’s OIG reported that physical barriers, including hospital wards with large open rooms having 8 to 16 beds and a lack of separate bath facilities, concerned women veterans and inconvenienced staff. Female patients had to compete with patients in isolation units for the limited number of private rooms in VA hospitals. Also, hospitals with communal bathrooms sometimes required staff to stand guard or use signs indicating that the bathroom was occupied by female patients. 
As required by section 322 of the Veterans’ Health Care Eligibility Reform Act of 1996, VA conducted nationwide privacy surveys of its facilities in fiscal years 1997 and 1998 to determine the types and magnitude of privacy deficiencies that may interfere with appropriate treatment in clinical areas. The surveys revealed numerous patient privacy deficiencies in both inpatient and outpatient settings. The fiscal year 1998 survey also showed that 117 facilities from all 22 Veterans Integrated Service Networks (VISN) spent nearly $68 million in construction funds in fiscal year 1998 to correct privacy deficiencies. Another 91 facilities from 20 of the 22 VISNs used a total of 130 alternatives to construction to eliminate deficiencies. These alternatives included actions such as initiating policy changes that would admit female patients only to those areas of the hospital that have the appropriate facilities or issuing policy statements that gynecological examinations would only be performed in the women’s clinics or contracted out. In addition, VISN and medical center staff developed plans for correcting and monitoring the remaining deficiencies. Although the 1998 survey showed that VA has improved the health care environment to afford women patients comfort and a feeling of security, the survey also revealed that many deficiencies still exist. (See table 1.) Of those facilities with deficiencies, the most prevalent inpatient deficiency was a lack of sufficient toilet and shower privacy, and the most prevalent outpatient deficiency was the lack of curtain tracks in various rooms. Consistent with VA’s strategic plan for fiscal years 1998 through 2003, a task force with representatives from VHA and the Center for Women Veterans was established to identify, prioritize, and develop plans for addressing five major issues related to women veterans’ health care, one of which was patient privacy. 
Further, VA plans to assess the progress made in correcting patient privacy deficiencies on an annual basis between fiscal years 1999 and 2001. VA requires that each facility have a plan for corrective action and a timetable for completion; VA has also directed each VISN to integrate the planned corrections into its construction programs. To correct the remaining deficiencies, VA projects it will spend $49.3 million in fiscal year 1999 and $41 million in fiscal year 2000. Over this same period, medical centers are expected to spend approximately $647,000 more in discretionary funds to make some of these corrections. Beyond fiscal year 2000, VA projects it will spend an additional $77 million in capital funds; six facilities in VISNs 6 and 7 account for 58 percent of the total projected spending beyond fiscal year 2000. While correcting privacy deficiencies has allowed VA to better accommodate women veterans’ health care needs, VA faces other problems accommodating women veterans who need inpatient mental health treatment. In the summer of 1998, VA established a task force of clinicians and women veterans coordinators to assess mental health services for women veterans and make recommendations by June 1999 for improving VA’s capacity to provide inpatient psychiatric care to this population. This task force is chaired by the Director of the Center for Women Veterans. VA data show that in fiscal year 1997, mental disorder was the most prevalent diagnosis—26.4 percent—for hospitalized women veterans. While inpatient psychiatric accommodations are available in VA facilities, in most instances the environment is not conducive to treating women veterans. In 1997, VA’s Center for Women Veterans reported that a woman veteran hospitalized on a VA mental health ward for post-traumatic stress disorder, substance abuse, or another psychiatric diagnosis is often the only female on a ward with 30 to 40 males. 
This disparate ratio of women to men discourages women from discussing gender-specific issues and also makes it difficult to provide group therapy addressing women’s treatment issues. Women veterans also noted that they were concerned about their safety in this environment. These concerns included inappropriate remarks or behavior by male patients and inadequate levels of privacy. During our site visits, two women veterans expressed similar concerns. VA has separate inpatient psychiatric units for women veterans at five locations: Battle Creek, Michigan; Brockton-West Roxbury, Massachusetts; the Central Texas Health Care System; Brecksville-Cleveland, Ohio; and the Palo Alto, California, Health Care System. Women veterans often do not want to or are unable to leave families and support systems to travel to one of these facilities for treatment. Staff at one of the medical centers we visited in Florida told us that a few of their women patients who had been sexually traumatized would be better served in an inpatient setting, but the nearest suitable inpatient facilities were those in California and Ohio, and the patients did not want to go that far from home. VA’s greater emphasis on women veterans’ health has resulted in an increase in both the availability and use of general and gender-specific services, such as pap smears, mammograms, and reproductive health care. Some VA facilities offer a full complement of health care services, including gender-specific care, on a full-time basis in separate clinics designated for women. Others may only offer certain services on a contractual or part-time basis. According to program officials and the women veterans coordinators at the locations we visited, the variation in the availability and delivery of services is generally influenced by the medical center directors’ views of the health needs of the potential patient population, available resources, and demand for services. 
The increase in the availability of services and the emphasis on women veterans’ health have contributed to increases in the number of women veterans served and visits made, with the exception of inpatient care. Between fiscal years 1994 and 1997, the number of gender-specific services provided to women veterans increased about 42 percent, from over 85,000 to over 121,000. The total number of inpatient and outpatient visits made during this same period increased nearly 56 percent, from about 893,000 to almost 1.4 million. Over the past 10 years, GAO, VA’s OIG, and VA’s Advisory Committee on Women Veterans reported that VA was not providing adequate care to women veterans and was not equipped to do so. These organizations found that VA (1) was not providing complete physical examinations, including gynecological exams for women; (2) lacked the equipment and supplies to provide gender-specific care to women, such as examination tables with stirrups and speculums; and (3) lacked guidelines for providing care to women. As a result, VA began to place more emphasis on women veterans’ health and looked for ways to respond to these criticisms. For example, to ensure equity of access and treatment, VA designated women veterans’ health as a special emphasis program that merited focused attention. In 1983, VA began requiring medical centers to develop written plans that show how they will meet the health care needs of women veterans. At a minimum, these plans must specify (1) that a complete physical examination for women includes a breast and gynecological exam, (2) provisions for inpatient and outpatient gynecology services, and (3) referral procedures for necessary services unavailable at VA facilities. VA also procured the necessary equipment and supplies to treat women. In addition, VA established separate clinics for women veterans in some of its medical facilities. 
The locations with separate women’s clinics that we visited had written plans that contained the required information and the necessary equipment and supplies to provide gender-specific treatment to women. Also, we found evidence that women veterans coordinators were monitoring services provided to ensure proper care and follow-up. VA is better able to accommodate women patients than it was prior to the early 1990s. In 1997, VA provided in-house 94 percent of the routine gynecological care sought by women veterans, even though its number of women’s clinics fell from 126 in 1994 to 96 in 1998. Some VA facilities closed their women’s clinics because of consolidation or implementation of primary care. Others are phasing their women’s programs into primary care, especially the facilities that had limited services available in the women’s clinic. This is consistent with VA’s efforts to enhance the efficiency of its health care system. For example, since September 1995, VA has merged or is in the process of merging the management and operations of 48 hospitals and clinic systems into 23 locally integrated systems. While women veterans can obtain gender-specific services as well as other health care services at most VA medical facilities, the extent to which care, especially gender-specific care, is available varies by facility. Some facilities offer a full array of routine and acute gender-specific services for women—such as pap smears, pelvic examinations, mammograms, breast health, gynecological oncology, and hormone therapy—while others offer only routine or preventive gender-specific care. Of the five sites we visited, two—Tampa and Boston—are Women Veterans’ Comprehensive Health Centers, which enable women veterans to obtain almost all of their health care within the center. Generally, these centers have full-time providers who may also be supported by other clinicians who provide specialty care on a part-time basis. 
For example, the Tampa Women Veterans’ Comprehensive Health Center, which provided care to about 3,000 women in 1997, is run by a full-time internist, who is supported by another internist, four nurse practitioner primary care providers, a gynecologist, a psychologist, a psychiatrist, and other health care and administrative support staff. The Tampa center as well as the Boston center provide their services 5 days a week. Other facilities offer less extensive services than those offered within the comprehensive centers. For example, the VA medical center in Washington, D.C., offers only routine or preventive gender-specific care by a nurse practitioner about 4.5 days a week; acute or more specialized gynecological care is only offered one-half day a week with the assistance of a gynecologist and general surgeon through a sharing agreement with a local Department of Defense facility. Other health care services are available within the medical center. The range of services provided by VA’s nonhospital-based clinics varies as well. Some nonhospital-based clinics, like the one in Orlando, may provide services almost comparable to those provided by the medical center or comprehensive center. Other clinics, however, offer services on a more limited basis. For example, the nonhospital-based clinic associated with one of the medical centers we visited only offers gynecological services once a week. According to the women veterans coordinator, the average waiting time to get a gynecology appointment at this clinic is 51 days. She explained that if the situation is urgent, arrangements are made to have the patient seen in the urgent care clinic or at the medical center. Variation in services at VA medical facilities may be attributable to one or more factors, such as medical center management’s views on the level of services needed, funding, staffing, and demand for services. 
The specific services offered and the manner in which they are delivered within VA facilities are left to the discretion of medical center or VISN management. Most VA facilities did not receive additional funding to establish health care programs for women and had to provide these additional services while maintaining or minimally affecting existing programs. Initially, VHA provided additional funding for the comprehensive centers, which was supplemented by funds from the medical center’s budget. VHA also provided some additional funding in 1994 to help VA facilities obtain resources to counsel women veterans who had been sexually traumatized. The women veterans coordinators at the five medical center locations we visited told us that the medical center directors have a strong commitment to providing quality health care to women veterans and that without such support, it would be difficult to meet women veterans’ needs or improve the women’s health program. Some women’s programs had to be established and operated using the medical center’s existing funding and resources, which included no provisions for these services. Although the Tampa and Boston centers received VHA funding to establish a comprehensive health center, they still had to obtain additional funding from the medical center, which required management’s support. The availability of gender-specific services may also be influenced by the demand for these services. At two locations we visited, the women veterans coordinators told us that when they first opened their women’s clinics, they operated on a very limited scale—one-half to 1 day a week. However, the demand was so overwhelming that they increased their operations to 5 days a week. On the other hand, the women veterans population in some areas is small and may not generate a high enough demand for gender-specific services to provide them in a separate women veterans’ health care program or within the medical center on a full-time basis. 
In such instances or if a very small number of female veterans have historically availed themselves of the services, it may not be cost-effective to provide these services in-house, as pointed out by VA’s OIG in 1993. Instead, it may be appropriate to contract out for these services. In the 1990s, women veterans’ utilization of gender-specific services has increased significantly. Outpatient and inpatient visits among women veterans at VA facilities increased more than 50 percent between fiscal years 1994 and 1997. Based on VA’s survey of its medical facilities, the number of women veterans receiving gender-specific services increased about 42 percent, from more than 85,000 to almost 121,200, during the same period. (See table 2.) Between fiscal years 1994 and 1997, the number of pap smears and mammograms provided to women veterans increased dramatically. In fiscal year 1997, almost 53,000 women veterans received pap smears, a 63-percent increase over fiscal year 1994. Similarly, in fiscal year 1997, about 36,400 women veterans received mammograms, a 47-percent increase over fiscal year 1994. Reproductive health care services, which cover the entire range of gynecological services, were provided to over 31,800 women veterans in fiscal year 1997, 12 percent more than in fiscal year 1994. According to VA, the pap smear and mammography examination rates among appropriate and consenting women veterans in 1997 were 90 percent and 87 percent, respectively. VA has set goals to increase the mammography and pap smear examination rates from their current base rates to 92 percent and 90 percent, respectively, by fiscal year 2003. Women veterans have also used more health care services in general, consistent with VA’s goal to meet women veterans’ total health care needs. With the exception of inpatient care, the number of women veterans who use VA health care services and the frequency of their usage continue to increase. 
For the 5-year period between fiscal years 1992 and 1997, the women veteran population increased only slightly, from about 1.2 million to 1.23 million. However, between fiscal years 1994 and 1997, the number of women veterans who received outpatient care increased 32 percent, from about 90,000 to more than 119,000, and the total number of outpatient visits increased 57 percent, from nearly 870,000 to over 1.3 million. (See table 3.) During this same period, the number of women veterans who received inpatient care decreased about 5 percent, from about 14,350 to 13,700, which is consistent with VA’s—and the nation’s—current health care trend to deliver services in the least costly, most appropriate setting. VA’s health care program for women veterans has made important strides in the last few years. VA has made good progress informing women veterans about their eligibility for services and the services available, assisting women veterans in accessing the system, correcting patient privacy deficiencies, and increasing health care services for women veterans. Most importantly, VA’s efforts are reflected in the increased availability of services and utilization by women veterans. While progress has been made, the importance of sustaining efforts to address the special needs of women veterans will only increase, as their percentage of the total veteran population is projected to double by 2010. Coincident with these demographic changes, VA is making changes to the way it delivers health care, including integrating and consolidating facilities while maintaining quality of care and implementing eligibility reform. VA will need to be especially vigilant to ensure that women veterans’ needs are appropriately addressed as it implements these overall changes. 
In its comments on a draft of this report, VA agreed with our findings that progress has been made in serving women veterans through the Women Veterans’ Health Program but that additional work is required to improve outreach to women, rectify privacy issues, and improve inpatient environments for women undergoing inpatient psychiatric treatment. VA also provided some technical comments, which we have incorporated as appropriate. VA’s comments are included as appendix II. Copies of this report are being sent to the Secretary of Veterans Affairs, other appropriate congressional committees, and interested parties. We will also make copies available to others on request. If you have any questions about the report, please call me or Shelia Drake, Assistant Director, at (202) 512-7101. Jacquelyn Clinton, Evaluator-in-Charge, was a major contributor to this report. To determine the barriers to women veterans obtaining care within VA, we talked with officials in the Center for Women Veterans, within the Office of the Secretary; VHA; two VBA regional offices; and Readjustment Counseling Centers (Vet Centers) in Tampa, Florida; St. Petersburg, Florida; and New Orleans, Louisiana. We also reviewed Women Veterans Advisory Committee reports and talked with women veterans and VA program officials in five medical centers: Bay Pines, Florida; Boston, Massachusetts; Tampa; New Orleans; and Washington, D.C. These medical centers were selected because they offered different levels of health care services to women veterans. To determine the availability and use of gender-specific care, we discussed women veterans’ health care services with officials at VA’s Central Office and the five medical centers we visited. We reviewed VA medical centers’ women veterans health care plans, relevant VA policy directives, and women veterans health care utilization data. 
We also reviewed quality assurance plans, annual reports, minutes of Women Veterans Advisory Committee meetings, outreach materials, and other written documentation and materials.
Pursuant to a congressional request, GAO reviewed the status of the Department of Veterans Affairs' (VA) health care program for women, focusing on: (1) the progress VA made in removing barriers that may prevent women veterans from obtaining VA health care services; and (2) the extent to which VA health care services, particularly gender-specific services, are available to and used by women veterans. GAO noted that: (1) VA has made considerable progress in removing barriers that prevent women veterans from obtaining care; (2) VA has increased outreach to women veterans to inform them of their eligibility for health care services and designated women veterans coordinators to assist women veterans in accessing VA's health care system; (3) VA has also improved the health care environment in many of its medical facilities, especially with respect to accommodating the privacy needs of women veterans; (4) however, VA recognizes that it has more work to do in these areas and plans to address concerns about the effectiveness of its outreach efforts and privacy barriers that still exist in some facilities; (5) in response to women veterans' concerns, VA has begun to assess its capacity to serve women veterans; (6) with regard to gender-specific services, VA's efforts to emphasize women veterans' health care have contributed to a significant increase in the use of all services over the last 3 years; (7) the range of services differs by facility; services may be provided in clinics designated specifically for women veterans, or they may be provided in the overall medical facility health care system; (8) more importantly, utilization has increased significantly between 1994 and 1997; (9) for example, the number of women veterans receiving gender-specific services grew from over 85,000 to more than 121,000; and (10) during the same time period, the number of women veterans treated for all health care services on an outpatient basis increased by about 32 percent, to about 119,300.
The Food Stamp Program provides eligible low-income households with paper coupons or electronic benefits that can be redeemed for food in stores nationwide. FNS funds food stamp benefits and about half of the states’ administrative costs and establishes regulations for implementing the Food Stamp Program. FNS regulations require that states certify household eligibility at least annually and establish requirements for households to report changes that occur after they are certified. Recently, FNS introduced several options and waivers to food stamp rules and regulations in order to increase program access and reduce the reporting burden on working families while minimizing the potential for payment errors. These include options and waivers related to program eligibility, reporting requirements, extending food stamp benefits to households leaving TANF, and options related to TANF recipients. To monitor program accountability, FNS’s quality control system measures states’ performance in accurately determining food stamp eligibility and calculating benefits. States implement the Food Stamp Program by determining whether households meet established limits on gross income and assets, calculating monthly benefits for eligible households, and issuing benefits to households. The actual amount of the food stamp benefit is based on household income after certain deductions—including shelter, dependent care, and child support. To be eligible for benefits, a household’s gross income may not exceed 130 percent of the federal poverty level and the value of its assets may not exceed $2,000. If the household owns a vehicle worth more than $4,650, the excess value is included in calculating the household’s assets. Recipients of TANF cash assistance are automatically eligible for food stamps—a provision referred to as “categorical eligibility” — and do not have to go through a separate food stamp eligibility determination process. 
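The eligibility rules just described (a gross income limit of 130 percent of the federal poverty level, a $2,000 asset limit, a $4,650 vehicle exemption with the excess counted as an asset, and categorical eligibility for TANF cash assistance recipients) can be sketched as a simple check. This is an illustrative simplification, not FNS's actual determination process, and the poverty-level figure used in the example is a hypothetical placeholder:

```python
ASSET_LIMIT = 2_000          # dollar limit on countable household assets
VEHICLE_EXEMPTION = 4_650    # vehicle value above this counts toward assets
GROSS_INCOME_FACTOR = 1.30   # gross income may not exceed 130% of poverty level

def food_stamp_eligible(gross_income, other_assets, vehicle_value,
                        poverty_level, receives_tanf_cash=False):
    """Simplified eligibility check based on the rules described in the report."""
    if receives_tanf_cash:
        return True  # categorical eligibility: no separate determination needed
    countable_assets = other_assets + max(0, vehicle_value - VEHICLE_EXEMPTION)
    return (gross_income <= GROSS_INCOME_FACTOR * poverty_level
            and countable_assets <= ASSET_LIMIT)

# A $6,000 vehicle contributes $1,350 (the excess over $4,650) to countable assets.
print(food_stamp_eligible(1_000, 500, 6_000, poverty_level=1_200))  # True
```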
In the wake of welfare reform, many needy families that are no longer receiving TANF cash assistance may receive other TANF-funded services or benefits. FNS gave states the option to extend categorical eligibility to families receiving TANF-funded benefits or services. States can determine which TANF-funded services or benefits confer categorical eligibility to food stamps. FNS offers two options that states can use to allow households to own a vehicle worth more than the amount allowed in current regulations and remain eligible for food stamp benefits. One option allows states to replace the federal food stamp vehicle asset rule with the vehicle asset rule from any TANF assistance program, as long as the rule is more liberal than the federal rule. States adopting the rule of a TANF-funded program must apply it to all applicants for food stamp benefits. States can also use the categorical eligibility option as a way to exclude all vehicles, as well as other assets the family may have, from the determination of eligibility for food stamps. This option affects the food stamp eligibility only of families authorized to receive a TANF-funded service or benefit. After eligibility is established, households are certified to be eligible for food stamps for periods ranging from 1 to 24 months, with 3-, 6-, and 12-month periods the most common. The length of the certification period depends on household circumstances, but only households in which all members are elderly or disabled can be certified for more than 12 months. Once the certification period ends, households must reapply for benefits, at which time eligibility and benefit levels are re-determined. Households with stable income are generally given longer certification periods than households with fluctuating income. Prior to welfare reform, federal regulations required households to have a face-to-face interview with an agency worker at each re-certification. 
Current regulations give states the option to require only one face-to-face interview a year regardless of the length of the certification period. Between certification periods, households must report changes in their circumstances—such as household composition, income, and expenses— that may affect their eligibility or benefit amounts. States determine how frequently households must file reports. A state may require a household to submit a monthly report on their financial circumstances along with required verification even if nothing changed. If a household is not required to file a monthly report, it is required to report changes in income and other circumstances as they occur—called “change reporting.” States can require different types of reporting for different household types and generally require households with earnings to report more frequently than households with no earned income. FNS offers alternatives to monthly and change reporting: quarterly and semiannual reporting. Both of these reporting methods decrease the frequency with which households with earnings are required to report. FNS also offers three waivers to change reporting that reduce the reporting burden on households with earnings. (See table 1.) USDA now provides a transitional benefit option to states to help families leaving TANF retain their food stamp benefits. Because families leaving TANF are no longer automatically eligible for food stamps based on their receipt of TANF cash assistance, they cannot receive food stamps without a re-determination of eligibility. The Transitional Benefit Alternative, introduced in November 2000, gives states the option to continue to provide families with their same food stamp benefit amount for 3 months after they leave welfare. As part of its deliberations on food stamp reauthorization, the Congress is considering extending the transitional benefit to 6 months. 
Finally, recognizing that TANF and the Food Stamp Program generally are administered by the same agency at the local level, the 1996 welfare reform legislation provided an option for states to merge their TANF and Food Stamp Program rules into a single set of eligibility and benefit requirements for households receiving both TANF and food stamps. This option, called the Simplified Food Stamp Program, allows states to align all of their TANF and Food Stamp Program rules. The option also allows states to implement a portion of the simplified program in which only the food stamp work requirement is replaced by TANF’s work requirement. FNS monitors states’ performance by assessing how accurately they determine food stamp eligibility and calculate benefits. Under FNS’s quality control system, the states calculate their payment errors by drawing a statistical sample to determine whether participating households received the correct benefit amount. The states review case information and make home visits to determine whether households were eligible for benefits and received the correct benefit payment. FNS regional offices validate the results by reviewing a subset of each state’s sample to determine its accuracy and make adjustments to the state’s overpayment and underpayment errors as necessary. States are penalized if their payment error rate is higher than the national average, which was 8.9 percent in fiscal year 2000. Food Stamp Program payment errors occur for a variety of reasons. Overpayments can be caused by inadvertent or intentional errors made by recipients and caseworkers. According to FNS’ quality control system, the states overpaid food stamp recipients about $976 million in fiscal year 2000 and underpaid recipients about $360 million. A little over half of these errors occurred when state food stamp workers made mistakes, such as misapplying complex food stamp rules in calculating benefits. 
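Under this quality control approach, a state's payment error rate reflects both overpayments and underpayments as a share of benefits issued. A minimal sketch of that calculation follows; the combination method shown is an illustrative assumption rather than FNS's exact formula, and the total-issuance figure is a hypothetical placeholder chosen so the result lands near the 8.9 percent fiscal year 2000 national average:

```python
def payment_error_rate(overpaid, underpaid, total_issued):
    """Combined payment error rate: dollars paid in error as a share of issuance."""
    return (overpaid + underpaid) / total_issued * 100

# FY2000 overpayment/underpayment dollars are from the report ($976M and $360M);
# the $15B total issuance is a hypothetical placeholder, not a reported figure.
rate = payment_error_rate(976e6, 360e6, 15e9)
print(round(rate, 1))  # about 8.9
```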
The remaining errors occurred because participants, either inadvertently or deliberately, did not provide accurate information to state food stamp offices. According to USDA, about half of all payment errors are due to an incorrect determination of the household’s income. In 1999, every state except one had a higher payment error rate among households with earnings as compared with households without earnings. Because their hours of work per week vary and they change jobs frequently, low-wage workers often have fluctuating incomes. Recipients are required to report these income changes, and eligibility workers must adjust their food stamp benefits correctly to avoid payment errors. In order to minimize payment errors, states usually certify households with earnings for shorter periods and require them to report more frequently than households with no earned income. Almost all states used one or more options or waivers to change their food stamp eligibility determination process. More than half of the states chose to confer categorical eligibility for food stamps to households receiving certain TANF-funded services or benefits. Thirty-three states used available options to exempt some or all vehicles from counting as assets. States used these options to increase the number of households eligible for food stamps, to simplify the administrative process for eligibility workers, and to support working families; however, most of these states considered them a cumbersome way to increase access to food stamps. Thirty-four states extended eligibility for food stamps to households that are eligible to receive TANF-funded services or benefits. Many states conferred categorical eligibility only to households receiving TANF-funded benefits such as emergency assistance and childcare, while some states conferred categorical eligibility to food stamp applicants simply by providing them with information and referral services paid for with TANF funds. 
For example, during the food stamp application process, clients who may be financially ineligible for food stamps could become categorically eligible for benefits by virtue of having received a referral to a specific TANF-funded program. Although the primary reason states gave for conferring categorical eligibility was to increase access to food stamps by making households who are eligible for a TANF-funded service automatically eligible for food stamps, states cited other benefits of this option. For example, by eliminating the need to calculate the value of a food stamp applicant’s assets, the eligibility worker’s administrative burden is reduced. Furthermore, five states noted that conferring categorical eligibility for food stamps makes children eligible for the school lunch program, even if the household does not actually qualify for a food stamp benefit. (See fig. 1.) While about two-thirds of the states used the categorical eligibility option, some states pointed out difficulties that the option created. For example, many individuals made categorically eligible for food stamps through receipt of a pamphlet or referral to a service may in fact not actually qualify for a food stamp benefit, possibly increasing the administrative burden on food stamp workers. In addition, several officials said they would like the food stamp rules pertaining to categorical eligibility simplified. They noted that categorical eligibility is determined in part by the source of the funding for the program under which the household receives noncash benefits or services. Because many programs have multiple funding sources, it can be difficult to determine whether a particular program meets the TANF funding requirements. Another official said that categorical eligibility is difficult to explain to staff. Other officials noted problems tied to the variation from state to state that the option creates. 
One official commented that allowing states to determine which of their welfare-funded services to use in granting categorical eligibility for food stamps could create a great deal of national variation in who can access this federal entitlement program. Using TANF-funded services as a basis for categorical eligibility, a state official explained, is a complicated way of excluding vehicles when determining food stamp eligibility. Thirty-three states used available options to exempt some or all vehicles from counting as assets in determining food stamp eligibility in order to increase access, support clients’ work efforts, or simplify eligibility determination for food stamp workers. (See fig. 2.) Twenty-nine of these states chose to replace their food stamp vehicle rules with their TANF program rules. While most of these states replaced their food stamp vehicle asset rules with their TANF cash assistance rules, a few states used rules from their TANF noncash assistance childcare programs. Seven states told us that they used the option to confer categorical eligibility to recipients of TANF-funded services as a way to exclude all vehicles and other assets from eligibility determination. Specifically, six of the seven states told us that they used categorical eligibility to increase access to food stamps and three said that they used it to support client work efforts. (See fig. 3.) While most states used available options to liberalize the way vehicles are considered in the food stamp eligibility determination process, 17 states used existing Food Stamp Program rules regarding vehicles. Seven of these states said that they could not replace their food stamp vehicle rules with TANF vehicle rules because their TANF rules were more restrictive than their food stamp rules. In at least one of these states, changes to TANF rules required approval by the state’s legislative body. 
State officials in almost half of the states told us that the Food Stamp Program’s vehicle asset rules should be changed to exempt at least one vehicle per household. Other state officials wanted the exemption value of a vehicle increased to reflect the current cost of vehicles. Almost all states used a reporting option or waiver to change the way households with earnings are required to report changes in their circumstances that could affect their eligibility for food stamps as well as their benefit amount. These options and waivers allowed states to alter the standard reporting methods of monthly and change reporting. Many states told us that they used reporting options and waivers to reduce their payment errors, to ease program administration, and to simplify paperwork requirements for households. Because some reporting options applied to specific households only, many states considered them somewhat restrictive. The most frequently used reporting alternatives were those that eliminated the requirement to report changes in earned income of $25 or more per month. Eighteen states chose a waiver allowing households to report changes in employment status, which includes changes in wage rates, number of hours worked in a week, and a move from part-time to full-time employment or vice-versa. Seventeen states chose the waiver to require recipients to report only changes in income that exceeded $80 or $100. (See fig. 4.) States are allowed to use more than one reporting option or waiver. Thirteen states used two or more alternatives. However, some states chose not to use any reporting options or waivers, citing concerns over payment errors and the cost and burden of implementation, such as the cost of reprogramming computer systems to implement a new reporting system. Ten states used the semiannual reporting option, and 5 states used the waiver allowing quarterly reporting. 
In these states, households with earned income are allowed to report semiannually or quarterly without reporting changes in between. Households subject to semiannual reporting are required to report if their gross income exceeds 130 percent of poverty. Should a household report a change that would increase the household’s food stamp benefit, the state must make the change; however, the state is generally not allowed to make changes that would reduce the food stamp benefit amount. States are held responsible only for errors resulting from miscalculating benefits at certification, or if income exceeds 130 percent of poverty and the change is not reported. State agencies are not held responsible for errors arising from a change in household circumstances that the household did not report, if the state’s policies do not require the household to report the change. States selecting the semiannual reporting requirement must certify households for at least a 6-month period, and they have the option to eliminate every other face-to-face interview because of the new rule requiring only one face-to-face interview a year. Although the semiannual reporting option provides states with an opportunity to reduce the reporting burden on working families with some impunity from payment errors, some states want to adjust the food stamp benefit in response to all reported changes in household income. Half of the states using the semiannual option requested and received a waiver allowing them to adjust benefits based on all changes reported by families. State officials gave various reasons for requesting this waiver to semiannual reporting. In some states, the Food Stamp Program shared the same computer system and database used for determining eligibility for other programs, such as TANF and Medicaid. 
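The asymmetry in how mid-period changes are handled under semiannual reporting (increases must be applied; decreases generally may not be, unless the state holds the waiver to act on all reported changes) can be sketched as follows. The function and parameter names are illustrative, not taken from any FNS system:

```python
def adjust_benefit(current_benefit, recomputed_benefit, state_has_waiver=False):
    """Apply a mid-period reported change under semiannual reporting rules."""
    if recomputed_benefit > current_benefit:
        return recomputed_benefit      # increases must be applied
    if state_has_waiver:
        return recomputed_benefit      # waiver states act on all reported changes
    return current_benefit             # otherwise, decreases are not applied

print(adjust_benefit(200, 250))                         # 250: increase applied
print(adjust_benefit(200, 150))                         # 200: decrease ignored
print(adjust_benefit(200, 150, state_has_waiver=True))  # 150: waiver applies it
```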
Since these states link their programs, changes that families report to one program often automatically change the food stamp data, and states wanted the ability to adjust benefits according to this new information. Other states said that the waiver was useful because their food stamp workers have always adjusted food stamps based on reported changes; not to do so for all food stamp recipients would be confusing. Officials in 28 states said they are considering the semiannual reporting option. Nine states would implement the option only with the waiver allowing them to act on all reported changes in part because of computer integration issues. Others would consider the option with a waiver allowing them to apply it to all food stamp households, not just households with earnings. Twelve states are not using or considering the semiannual reporting option. Officials in these states told us the option is either too burdensome to implement, the rules are too complicated, or that it might increase payment errors. Officials from 38 states said that additional changes to the reporting requirements were needed. Some noted that states should be allowed to use the same reporting requirements for all households, not just households with earnings. Although states told us that a primary reason they used reporting options and waivers was to minimize the payment error associated with earnings, concern over payment accuracy affected states’ decisions regarding other options and waivers as well. For example, although FNS gave states the option to limit face-to-face interviews to once a year, some states continue to require households with earnings to come in more frequently because of concerns over payment accuracy. Officials in 45 states told us that the effect on their payment error rate was either the most important factor or a contributing factor in their decision to use particular options and waivers. 
As a result, officials in many states said that USDA’s quality control program should not focus solely on payment accuracy. State officials also suggested changes in the way that payment errors are calculated. For example, they noted that agency error should be counted separately from client error, because the agency had no control over whether the client reported required information correctly. Although only three states reported using the Transitional Benefit Alternative, many states told us they plan on using it. At the time of our interviews, the 3-month Transitional Benefit Alternative was not yet fully implemented, but states could request this option. Twenty states said that they were considering it. Twenty-seven states said they would implement the proposed 6-month Transitional Benefit Alternative if it became available. The primary reason that states would provide a transitional benefit is to support working families. Many states said that the option helped with the transition from welfare to work by stabilizing the families after they leave welfare by guaranteeing a fixed food stamp benefit regardless of how their income fluctuates during the transitional benefit period. (See fig. 5.) Some states that would use the 6-month option but not the 3-month option said that the additional 3 months of support to families making the transition from welfare to work would make the implementation costs worthwhile. The 12 states that had decided not to use transitional benefits said they were concerned about the implementation costs. At least eight of these states indicated that the computer changes required to implement the transitional benefit would be extensive. (See fig. 6.) Eighteen states said they were undecided about the 3-month option, and 14 states had not yet decided about the 6-month option. Several of the undecided states indicated that they were concerned about potential costs associated with reprogramming their computers. 
No state is implementing or plans to implement all aspects of the Simplified Food Stamp Program option. The main reason states gave for not choosing this option was that it was too complex and difficult to implement. The simplified program option was to be a vehicle for creating conformity between TANF and the Food Stamp Program by merging the programs’ rules into a single set of requirements for individuals receiving both types of assistance. However, as we reported earlier, since not all needy households receive both TANF and food stamps, the states selecting the simplified program option would, in effect, be operating three programs: one program for TANF recipients following state TANF rules; one program for food stamp recipients following federal food stamp regulations; and the simplified program for recipients of both food stamps and TANF. Furthermore, to whatever extent the states use the simplified program, they must also have demonstrated that total federal costs would not be more than the costs incurred under the regular Food Stamp Program—that is, the program has to be “cost neutral.” Figure 8 shows the reasons states gave for not choosing the option. In addition, while states are not planning to use the simplified program, some state officials indicated that it might be worthwhile to develop such a program if it could apply to all food stamp households, not just households receiving both TANF and food stamps. While no state is implementing all aspects of the simplified program option, nine states reported using some of the flexibility offered under the program. Eight states are aligning their food stamp and TANF work requirements. One state is aligning its TANF and food stamp reporting requirements to reduce the reporting burden on households participating in both programs. We provided USDA with the opportunity to comment on a draft of this report. 
While USDA did not provide formal comments, it did provide technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. If you or your staff have questions about this report, please contact me at (202) 512-7215 or Dianne Blank at (202) 512-5654. Individuals making key contributions to this report include Margaret Boeckmann, Elizabeth Morrison, and Lara Carreon. U.S. General Accounting Office. Food Stamp Program: Implementation of Electronic Benefit Transfer System. GAO-02-332. Washington, D.C.: 2002. U.S. General Accounting Office. Food Stamp Program: Program Integrity and Participation Challenges. GAO-01-881T. Washington, D.C.: 2001. U.S. General Accounting Office. Food Stamp Program: States Seek to Reduce Payment Error and Program Complexity. GAO-01-272. Washington, D.C.: 2001. U.S. General Accounting Office. Welfare Reform: Few States are Likely to Use the Simplified Food Stamp Program. GAO/RCED-99-43. Washington, D.C.: 1999. U.S. General Accounting Office. Food Stamp Program: Various Factors Have Led to Declining Participation. GAO/RCED-99-185. Washington, D.C.: 1999.
To help states administer their Food Stamp Programs, the Food and Nutrition Service (FNS) offers options and waivers to their program rules and regulations. Almost all states used options or waivers in their food stamp eligibility determination process. More than half of the states chose to make households receiving Temporary Assistance for Needy Families (TANF) services automatically eligible for food stamps. Thirty-three states exempted some or all vehicles in the determination of food stamp eligibility. Although most states used these options and waivers, they considered them a cumbersome way to increase access to the program for families owning a vehicle. Almost all states used at least one option or waiver to change the reporting methods required of food stamp households with earnings. The most frequently used reporting waivers exempted recipients from reporting changes in earned income of $25 or more per month. States used these options and waivers to simplify paperwork requirements for both the food stamp recipient and eligibility worker. Although few states were using the new option to provide food stamp benefits to families leaving TANF, 20 other states planned to implement the option. No state was implementing or planning to implement all aspects of the simplified program option, which allows states to merge their TANF and Food Stamp Program for families receiving both types of assistance. States told GAO that the simplified program option would make administering the programs more difficult because it creates a separate program, covering only a subset of food stamp recipients. However, nine states were using a portion of the simplified program to align their food stamp and TANF work or reporting requirements.
Consumers may access location-based services through smartphones or from in-car location-based services. Four types of companies are primarily responsible for smartphone products and services in the United States: mobile carriers, such as AT&T and Verizon; developers of operating systems, such as Apple’s iPhone iOS and Google’s Android; manufacturers, such as HTC and Samsung; and developers of applications such as games like Angry Birds, social networking applications like Facebook, or navigation tools like Google Maps. We refer to these companies as mobile industry companies. In-car location-based services are delivered by in-car communications systems known as “telematics” systems, portable navigation devices, and map and navigation applications for mobile devices. Companies can obtain location data in various ways. Mobile devices and in-car navigation devices determine location information through methods such as cell tower signal-based technologies, Wi-Fi Internet access point technology, crowd-sourced positioning, and GPS technology. Assisted-GPS (A-GPS), a hybrid technology that uses more than one data collection methodology, is also widely used. For example, companies such as Google and Apple use customer data to compile large databases of cell tower and Wi-Fi access points. Non-carriers use these crowd-sourced location maps to determine location by analyzing which cell tower and Wi-Fi signals are received by a device. Consumers’ location data are transmitted over the cellular network or Wi-Fi access points to companies providing the services. These location data may then be shared with third parties for various uses. For example, companies may choose to partner with third parties to provide a specific location-based service, such as real-time traffic information. Several agencies have responsibility to address consumers’ privacy and create related guidance.
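The crowd-sourced positioning approach described above can be sketched roughly: a device reports which Wi-Fi access points it observes, and a provider estimates position from the known coordinates of those access points. The sketch below is a simplified illustration only, not any particular company’s algorithm; the access-point database, coordinates, and signal-strength weighting are hypothetical.

```python
# Illustrative sketch of crowd-sourced Wi-Fi positioning: estimate a device's
# location as the signal-strength-weighted centroid of known access points.
# The database contents and weighting scheme are hypothetical simplifications.

# Crowd-sourced database: access-point MAC address -> (latitude, longitude)
AP_DATABASE = {
    "aa:bb:cc:00:00:01": (38.8977, -77.0365),
    "aa:bb:cc:00:00:02": (38.8980, -77.0370),
    "aa:bb:cc:00:00:03": (38.8975, -77.0360),
}

def estimate_position(scan):
    """scan: list of (mac, rssi_dbm) pairs observed by the device."""
    total_weight = 0.0
    lat_sum = lon_sum = 0.0
    for mac, rssi in scan:
        if mac not in AP_DATABASE:
            continue  # unknown access point: no position on file
        # Stronger signal (less negative RSSI) suggests the access point
        # is closer, so it receives a larger weight.
        weight = 1.0 / max(1.0, -rssi - 30)
        lat, lon = AP_DATABASE[mac]
        lat_sum += weight * lat
        lon_sum += weight * lon
        total_weight += weight
    if total_weight == 0:
        return None  # no known access points observed
    return (lat_sum / total_weight, lon_sum / total_weight)

scan = [("aa:bb:cc:00:00:01", -45), ("aa:bb:cc:00:00:02", -70)]
print(estimate_position(scan))
```

In practice, providers combine such Wi-Fi estimates with cell tower and GPS data (the A-GPS approach noted above) to trade accuracy against power use.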
The Federal Trade Commission (FTC) has authority to take enforcement action against unfair or deceptive acts or practices of companies; the Federal Communications Commission (FCC) has regulatory and enforcement authority over mobile carriers; and the Department of Commerce’s (Commerce) National Telecommunications and Information Administration (NTIA) advises the President on telecommunications and information policy issues. Additionally, the Department of Justice disseminates guidance on how law enforcement might request electronic evidence, such as a person’s current or historical location data. Representatives from mobile industry companies we spoke to for the September 2012 report and in-car navigation service companies we spoke to for the December 2013 report told us they primarily collect and share location data to provide location services and to improve those services. Mobile carriers and application developers offer a diverse array of services that use location information, such as services providing navigation and social networking services that are linked to consumers’ locations. To provide these services, carriers and developers need to quickly and accurately determine location. Location data can also be used to enhance the functionality of other services that do not need to know the consumer’s location to operate. Search engines, for example, can use location data as a frame of reference to return results that might be more relevant. For instance, if a consumer were to search for a pizza restaurant using a location-aware search engine, the top result may be a map of nearby pizza restaurants instead of the homepage of a national chain. In-car location services use location data to provide services such as turn-by-turn directions or roadside assistance. Representatives from both mobile industry companies and in-car navigation services companies told us they also use location data to improve the accuracy of their services.
Representatives from some in-car navigation service companies said they share aggregated location data associated with traffic flows with third parties to augment and improve the accuracy of real-time traffic services provided to consumers. Additionally, as we reported in 2012, mobile industry companies can use and sell location data to target the advertising that consumers receive through mobile devices. Doing so may make an advertisement more relevant to a consumer than a non-targeted advertisement, boosting advertising revenue. Advertising is particularly important to application developers, as many developers offer their applications free of charge and rely on advertising for revenue. Companies may also aggregate and store individual consumer data to create consumer profiles. Profiles can be used to tailor marketing or service performance to an individual’s preferences. Mobile industry companies and providers of in-car location services must also share consumer location data if a court determines that disclosure is warranted for law enforcement purposes. Because consumers generally carry their mobile devices with them, law enforcement can use device location data to determine the consumer’s location. Because of this correlation, location data are valuable to law enforcement for tracking the movements of criminal suspects. Mobile carriers must comply with court orders directing the disclosure of historical location data (i.e., where the device was in the past) and, in certain circumstances, real-time location data (i.e., where the device is now). Although consumers can benefit from location-based services designed to make their lives easier, consumers also expose themselves to privacy risks when they allow companies to access their location data. In some cases, consumers of location-based services may be unaware that companies share their location data for purposes other than providing those services.
As we stated in our September 2012 and December 2013 reports, these privacy risks include, but are not limited to, the following:

Disclosure to Unknown Third Parties for Unspecified Uses: According to privacy advocates, when a consumer agrees to use a service that accesses location data, the consumer is unlikely to know how his or her location data may be used in ways beyond enabling the service itself. For example, location data may be shared with third parties unknown to the consumer. Because consumers do not know who these entities are or how they are using consumers’ data, consumers may be unable to judge whether they are disclosing their data to trustworthy entities. Third parties that receive shared location information may vary in the levels of security protection they provide. If any of these entities has weak system protections, there is an increased likelihood that the information may be compromised.

Tracking Consumer Behavior: When location data are collected and shared, these data could be used in ways consumers did not intend, such as to track their travel patterns or to target consumers for unwanted marketing solicitations. Since consumers often carry their mobile devices with them and can use them for various purposes, location data along with data collected on the device may be used to form a comprehensive record upon which an individual’s activities may be inferred. Amassing such data over time allows companies to create a richly detailed profile of individual behavior, including habits, preferences, and routines—private information that could be exploited. Consumers may believe that using these personal profiles for purposes other than providing a location-based service constitutes an invasion of privacy, particularly if the data are used contrary to consumers’ expectations and result in unwanted solicitations or other nuisances.
Identity Theft: Criminals can use location data to steal identities when location data are disclosed, particularly when they are combined with other personal information. The risk of identity theft grows whenever entities begin to collect data profiles, especially if the information is not maintained securely. By illicitly gaining access to these profiles, criminals acquire information such as a consumer’s name, address, interests, and friends’ and co-workers’ names. In addition, a combination of data elements—even elements that do not by themselves identify anyone, such as individual points of location data—could potentially be used in aggregate to identify or infer a consumer’s behavior or patterns. Such information could be used to discern the identity of an individual. Furthermore, keeping data long-term, particularly if it is in an identifiable profile, increases the likelihood of identity theft.

Personal Security: Location data may be used to form a comprehensive record of an individual’s movements and activities. If disclosed or posted, location data may be used by criminals to identify an individual’s present or probable future location, particularly if the data also contain other personally identifiable information. This knowledge may then be used to harm the individual or his or her property through, for instance, stalking or theft. Access to location information also raises child safety concerns as more children access mobile devices and location-based services. According to the American Civil Liberties Union (ACLU), location updates that consumers provide through social media have been linked to robberies, and GPS technology has been involved in stalking cases.

Surveillance: Law enforcement agencies can obtain location data through various methods, such as a court order, and such data can be used as evidence.
However, according to a report by the ACLU, law enforcement agents could potentially track innocent people, such as those who happened to be in the vicinity of a crime or disturbance. Consumers generally do not know when law enforcement agencies access their location data. In addition to information related to a crime, the location data collected by law enforcement may reveal potentially sensitive destinations, such as medical clinics, religious institutions, courts, political rallies, or union meetings.

Industry and privacy advocacy groups have recommended practices for companies to follow in order to better protect consumers’ privacy while using their personal information. These recommended practices include: (1) providing disclosures to consumers about data collection, use, and sharing; (2) obtaining consent and providing controls over location data; (3) having data retention practices and safeguards; and (4) providing accountability for protecting consumers’ data. For the September 2012 report, we examined 14 mobile industry companies, and for the December 2013 report, we examined 10 in-car navigation services companies. These companies have taken steps that are consistent with some, but not all, of the recommended practices:

Disclosures: All of the companies we examined for both reports have privacy policies, terms-of-service agreements, or other practices—such as on-screen notifications—to notify consumers that they collect location data and other personal information. However, some companies have not consistently or clearly disclosed to consumers what they are doing with these data or which third parties they may share them with. For example, most of the in-car navigation service companies we examined for the 2013 report provide broadly worded reasons for collecting location data that potentially allow for unlimited data collection and use.
One of those companies’ terms of service states that the provided reasons for location data collection were not exhaustive. Furthermore, about half of the in-car navigation service companies’ disclosures allow for sharing of location data when the data are de-identified, but most of these companies’ disclosures did not describe the purposes for sharing such data.

Consent and Controls: All of the companies we examined for both reports indicated they obtain consumer consent to collect location data and obtain this consent in various ways, some of which are more explicit than others. Companies also reported providing methods for consumers to control collection and use of location data, but the methods and amount of control varied. For example, most of the 14 mobile industry companies we examined for the 2012 report indicated that consumers could control smartphones’ use of their location data from the phone; however, the ability to control this varied by operating system, with some providing more options. For example, the iPhone iOS operating system displays a pop-up window the first time a consumer activates a new application that includes location-based services. The pop-up states that the application seeks to use the consumer’s location and allows the consumer to accept or decline at that time. Similarly, Android smartphones notify consumers that an application will use location data at the time a consumer downloads a new application and seeks consumer consent through this process. Some in-car navigation systems we examined for the 2013 report use similar methods to notify consumers that they will collect location data to provide services. In contrast, other in-car navigation services obtain consent when a consumer purchases a vehicle.
According to one privacy group we met with, if consent is obtained in this manner, consumers may not be as likely to review a company’s stated privacy practices because they may be part of a larger set of documentation about the vehicle. Additionally, none of the 10 in-car navigation service companies we examined allow consumers to delete the location data that are, or have been, collected.

Retention and Safeguards: Officials from most of the companies we interviewed for the 2012 and 2013 reports said they kept location data only as long as needed for a specific purpose; however, in some cases, this could mean keeping location data indefinitely. Most of the privacy policies of the 14 mobile industry companies we examined did not state how long companies keep location data, and there was wide variation in how long in-car navigation services companies retain vehicle-specific or personally identifiable location data when a customer requests services, ranging from not at all to up to 7 years. All the mobile industry companies we examined reported ways they safeguard consumers’ personal information. However, some privacy policies did not state whether location was considered a form of personal information, so it was unclear whether the stated safeguards for personal information applied to location data. As we reported in 2013, companies may safeguard location data that they use or share, in part, by de-identifying them, but companies we examined used different de-identification methods. De-identified data are stripped of personally identifiable information. The de-identification method a company uses affects the extent to which consumers may be re-identified and exposed to privacy risks.
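The de-identification spectrum just described can be illustrated with a small sketch. The trip records, field names, and helper functions below are hypothetical, chosen only to contrast a persistent pseudonymous ID (where travel patterns remain linkable to one consumer) with fully aggregated, anonymous counts.

```python
# Illustration of the de-identification spectrum for location records.
# All data and field names here are hypothetical; real systems vary.
from collections import Counter

trips = [
    {"name": "Alice Smith", "vin": "VIN123", "origin": "home", "dest": "clinic"},
    {"name": "Alice Smith", "vin": "VIN123", "origin": "home", "dest": "office"},
    {"name": "Bob Jones",   "vin": "VIN456", "origin": "home", "dest": "office"},
]

def pseudonymize(records):
    """Replace direct identifiers with a persistent per-consumer ID.
    Risk remains: reusing the same ID across trips lets a consumer's
    history or patterns be discerned."""
    ids = {}
    out = []
    for r in records:
        uid = ids.setdefault(r["vin"], f"user-{len(ids) + 1}")
        out.append({"id": uid, "origin": r["origin"], "dest": r["dest"]})
    return out

def aggregate(records):
    """Drop identifiers entirely and keep only counts per route.
    Aggregated counts cannot be linked back to any individual."""
    return Counter((r["origin"], r["dest"]) for r in records)

print(pseudonymize(trips))
print(aggregate(trips))
```

In the pseudonymized output, both of the first consumer’s trips carry the same ID, so a repeated home-to-clinic pattern could still be inferred; the aggregated counts reveal only how many trips were taken between each pair of places.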
Location data that are collected along with a consumer’s name or other identifying information are, by definition, personally identifiable data and present the greatest privacy risks to consumers because a consumer’s identity is known. Privacy risks decrease when companies de-identify location data, but the level of risk falls on a spectrum depending on how easy it is to re-identify consumers. For example, de-identifying location data with unique identification numbers prevents the direct association of location data with a specific vehicle or individual. However, if the same identification number is re-used for the same consumer on multiple trips, then the consumer’s history or patterns can potentially be discerned. Conversely, consumers face little to no privacy risk when location data are stripped of any identification numbers and aggregated with other consumers’ data because the data are anonymous, meaning that the data cannot be linked to an individual at all (see fig. 1). All of the in-car navigation service companies we examined stated in their disclosures, or in interviews with us, that they use or share de-identified location data.

Accountability: We reported in 2012 and 2013 that companies’ accountability practices varied. For example, all 10 of the in-car navigation services companies we examined for the 2013 report stated in their disclosures or in interviews with us that they take steps to protect location data that they share with third parties. Additionally, some mobile carriers we examined for the 2012 report said they use their contracts with third parties they share consumers’ personal data with to require those third parties to adhere to industry recommended practices for location data.
In the 2013 report, we found that representatives of in-car navigation services companies said their employees must follow the companies’ internal policies to protect data, including location data, although these internal policies are not disclosed to consumers; some of the representatives further explained that employees who violate such policies are subject to disciplinary action and possibly termination. Separately, representatives from one of the in-car navigation service companies told us that it had conducted an independent audit of its practices to provide reasonable assurance that it was in line with company privacy policies. Additionally, three of the mobile industry companies we examined for the 2012 report had their privacy practices certified by TRUSTe, a firm that certifies businesses’ privacy programs. Lacking clear information about how companies use and share consumers’ location data, consumers deciding whether to allow companies to collect, use, and share data on their location would be unable to effectively judge whether their privacy might be violated.

In our September 2012 report on mobile device location data, we reported that federal agencies that have responsibility for consumer data privacy protection have taken steps to promote awareness of privacy issues, such as providing educational outreach and recommending actions aimed at improving consumer privacy. For example, in February 2012, NTIA prepared a report for the White House on protecting privacy and promoting innovation in the global digital economy. The report offered a framework and expectations for companies that use personal data. The framework includes a consumer privacy bill of rights, a multistakeholder process to specify how the principles in the bill of rights apply in particular business contexts, and effective enforcement. In February 2012, FTC issued a report on privacy disclosures for mobile applications aimed at children.
This report highlighted the lack of information available to parents prior to downloading mobile applications for their children and called on the mobile industry to provide greater transparency about their data practices. FTC also issued a consumer privacy report in March 2012 with recommendations for companies that collect and use consumer data, including location data. Finally, the Department of Justice has developed guidance on how law enforcement may obtain mobile location data. In our 2012 report, we concluded that NTIA and FTC could take additional actions to further protect consumers. For example, we found that NTIA had not defined specific goals, milestones, or performance measures for its proposed multistakeholder process, which consists of different groups involved with consumer privacy coming together to discuss relevant issues with the goal of developing codes of conduct for consumer privacy. Therefore, it was unclear whether the process would address location privacy. Consequently, we recommended that NTIA, in consultation with stakeholders in the multistakeholder process, develop specific goals, time frames, and performance measures for the multistakeholder process to create industry codes of conduct. In a December 2012 response to our report, the Department of Commerce (NTIA is an agency of Commerce) said it disagreed with this recommendation, stating that it is the role of the stakeholders, not the agency, to develop goals, time frames, and performance measures for the multistakeholder process. Additionally, the letter stated that stakeholders had made progress to develop their own goals, time frames, and performance measures for their efforts to create a code of conduct for mobile application transparency. We will continue to monitor NTIA’s efforts in this area. Additionally, we found that FTC had not issued comprehensive guidance to mobile industry companies with regard to actions companies should take to protect mobile location data privacy. 
Doing so could inform companies of FTC’s views on the appropriate actions companies should take to protect consumers’ mobile location privacy. We recommended that FTC consider issuing industry guidance establishing FTC’s views of the appropriate actions mobile industry companies could take to protect mobile location data privacy. In February 2013, FTC issued a staff report on mobile privacy disclosures; the report provided guidance for mobile industry companies to consider when disclosing their information collection and use practices. In particular, the report suggested best practices for operating systems, application developers, advertising networks and other third parties, and trade associations and other experts and researchers. For example, FTC said that operating systems should provide disclosures at the point in time when consumers access location-based services and obtain their affirmative express consent before allowing applications to access sensitive content like location data.

Currently, no comprehensive federal privacy law governs the collection, use, and sale of personal information by private-sector companies; rather, various federal laws pertain to the privacy of consumers’ data:

The Federal Trade Commission Act prohibits unfair or deceptive acts or practices in or affecting commerce and authorizes FTC enforcement action. This authority allows FTC to take remedial action against a company that engages in a practice that FTC has found is unfair or deceives customers. For example, FTC could take action against a company if it found the company was not adhering to the practices to protect a consumer’s personal information that the company claimed to abide by in its privacy policy.

The Electronic Communications Privacy Act of 1986 (ECPA), as amended, sets out requirements under which the government and providers of electronic communications can access and share the content of a consumer’s electronic communications.
ECPA also prohibits providers of electronic communications from voluntarily disclosing customer records to government entities, with certain exceptions, but companies may disclose such records to a person other than government entities. The act does not specifically address whether location data are considered content or part of consumers’ records. Some privacy groups have stated that ECPA should specifically address the protection of location data. The act also provides legal procedures for obtaining court orders to acquire information relevant to a law enforcement inquiry.

The Communications Act of 1934 (Communications Act), as amended, imposes a duty on telecommunications carriers to secure information and imposes particular requirements for protecting information identified as customer proprietary network information (CPNI), including the location of customers when they make calls. The Communications Act requires that companies obtain express authorization from consumers before they access or disclose call location information, subject to certain exceptions. Carriers must also comply with FCC rules implementing the E911 requirements of the Wireless Communications and Public Safety Act of 1999, including providing location information to emergency responders when mobile phone consumers dial 911.

We have previously concluded that the current privacy framework warrants reconsideration in relation to a number of issues. In our 2013 report on consumer data collected and shared by information resellers, we found that changes in technology and the marketplace have vastly increased the amount and nature of personal information, including location data, that are collected, used, and shared. We reported that while some stakeholders’ views differed, the current statutory framework does not fully address these changes.
Moreover, we reported that while current laws protect privacy interests in specific sectors and for specific uses, consumers have little control over how their information is collected, used, and shared with third parties. This includes consumers’ ability to access, correct, and control their personal information used for marketing, such as location data, and privacy controls related to the use of new technologies and applications, such as mobile and in-car navigation devices. In 2012, FTC and NTIA called on Congress to pass data privacy legislation that would provide a minimum level of protection for consumer data, including location data. Some Members of Congress have introduced legislative proposals that address the privacy of consumers’ location data. Chairman Franken, Ranking Member Flake, and Members of the Subcommittee, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For questions about this statement, please contact Mark L. Goldstein, Director, Physical Infrastructure Issues, at (202) 512-2834 or goldsteinm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Andrew Von Ah (Assistant Director), Michael Clements, Roshni Davé, Colin Fallon, Andrew Huddleston, Lori Rectanus, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Smartphones and in-car navigation systems give consumers access to useful location-based services, such as mapping services. However, questions about privacy can arise if companies use or share consumers' location data without their knowledge. Several agencies have responsibility to address consumers' privacy issues, including FTC, which has authority to take enforcement actions against unfair or deceptive acts or practices, and NTIA, which advises the President on telecommunications and information policy issues. This testimony addresses (1) companies' use and sharing of consumers' location data, (2) consumers' location privacy risks, and (3) actions taken by selected companies and federal agencies to protect consumers' location privacy. This testimony is based on GAO's September 2012 and December 2013 reports on mobile device location data and in-car location-based services and December 2012 and May 2013 updates from FTC and NTIA on their actions to respond to the 2012 report's recommendations. Fourteen mobile industry companies and 10 in-car navigation providers that GAO examined in its 2012 and 2013 reports—including mobile carriers and auto manufacturers with the largest market share and popular application developers—collect location data and use or share them to provide consumers with location-based services and improve consumer services. For example, mobile carriers and application developers use location data to provide social networking services that are linked to consumers' locations. In-car navigation services use location data to provide services such as turn-by-turn directions or roadside assistance. Location data can also be used and shared to enhance the functionality of other services, such as search engines, to make search results more relevant by, for example, returning results of nearby businesses. While consumers can benefit from location-based services, their privacy may be at risk when companies collect and share location data. 
For example, in both reports, GAO found that when consumers are unaware their location data are shared and for what purpose data might be shared, they may be unable to judge whether location data are shared with trustworthy third parties. Furthermore, when location data are amassed over time, they can create a detailed profile of individual behavior, including habits, preferences, and routes traveled—private information that could be exploited. Additionally, consumers could be at higher risk of identity theft or threats to personal safety when companies retain location data for long periods or in a way that links the data to individual consumers. Companies can anonymize location data that they use or share, in part, by removing personally identifying information; however, in its 2013 report, GAO found that in-car navigation providers that GAO examined use different de-identification methods that may lead to varying levels of protection for consumers. Companies GAO examined in both reports have not consistently implemented practices to protect consumers' location privacy. The companies have taken some steps that align with recommended practices for better protecting consumers' privacy. For example, all of the companies examined in both reports used privacy policies or other disclosures to inform consumers about the collection of location data and other information. However, companies did not consistently or clearly disclose to consumers what the companies do with these data or the third parties with which they might share the data, leaving consumers unable to effectively judge whether such uses of their location data might violate their privacy. In its 2012 report, GAO found that federal agencies have taken steps to address location data privacy through educational outreach events, reports with recommendations to protect consumer privacy, and guidance for industry. 
For example, the Department of Commerce's National Telecommunications and Information Administration (NTIA) brought stakeholders together to develop codes of conduct for industry, but GAO found this effort lacked specific goals, milestones, and performance measures, making it unclear whether the effort would address location privacy. Additionally, in response to a recommendation in GAO's 2012 report, the Federal Trade Commission (FTC) issued guidance in 2013 to inform companies of the Commission's views on the appropriate actions mobile industry companies should take to disclose their privacy practices and obtain consumers' consent. GAO made recommendations to enhance consumer protections in its 2012 report. GAO recommended, for example, that NTIA develop goals, milestones, and measures for its stakeholder initiative. NTIA stated that taking such actions is the role of the stakeholders and that its stakeholders had made progress in setting goals, milestones, and performance measures. GAO will continue to monitor this effort.
Insurance can spread risk over time, across geographical areas, and among industries and individuals. While private insurers assume some financial risk when they write policies, they employ various strategies to manage risk so that they limit potential financial exposures, earn profits, and build capital needed to pay claims. For example, insurers charge premiums for coverage and establish underwriting standards, such as refusing to insure customers who pose unacceptable levels of risk, or limiting coverage in particular geographic areas. Insurance companies may also purchase insurance from reinsurers to cover specific portions of their financial risk. Reinsurers use similar strategies to limit their risks, including charging premiums and establishing underwriting standards. States regulate private insurance and may impose various restrictions on insurers’ risk management practices, such as premium rate increases and coverage limitations, to protect consumers and to ensure insurer solvency. However, many of these risk management strategies are not available to NFIP, which is required to assume and retain all of the risk and to generally accept all insurance applicants, including those with potentially high-risk properties. The uncertain and potentially large losses associated with weather- related events are among the biggest risks that property insurers face. Virtually anything that is insured—property, crops and livestock, business operations, or human life and health—is vulnerable to weather-related events. For insurers, remaining financially solvent generally involves estimating and setting rates that reflect insured risks; any unanticipated changes in the frequency or severity of weather-related events can have financial consequences for them. 
Recent scientific assessments have found that climate change has altered, or will alter, the frequency and severity of damaging weather-related events such as droughts and floods, and that it will alter crop productivity and threaten coastal communities as sea levels rise. Under certain circumstances, the private sector may consider some risks uninsurable. In other instances, the private sector may offer to insure a risk, but at rates that many property owners cannot afford. Without insurance, affected property owners must rely on their own resources or seek disaster assistance from local, state, and federal sources in the event of a loss. In situations where the private sector will not insure a particular type of risk, the public sector may create markets to ensure the availability of insurance. For example, several states have established Fair Access to Insurance Requirements (FAIR) and windstorm plans that pool resources from insurers doing business in the state to make insurance available to property owners who either cannot obtain coverage in the private insurance market or cannot do so at an affordable rate. Similarly, at the federal level, the NFIP and the federal crop insurance program were established to provide coverage where private markets did not exist, and partly to provide an alternative to disaster assistance. NFIP has three components: (1) the provision of flood insurance; (2) the requirement that participating communities adopt and enforce floodplain management regulations that are at least as stringent as FEMA's national minimum standards; and (3) the identification and mapping of floodplains, which helps to determine which insurance premiums and regulations apply to a particular property. Community participation in NFIP is voluntary. However, communities must join NFIP and adopt FEMA-approved building standards and floodplain management strategies in order for their residents to purchase flood insurance through the program. 
In addition to meeting these federal standards, the regulations that each community enacts and implements must meet the minimum state requirements, which are established consistent with NFIP standards. For example, communities must adopt at least the minimum standards for floodplain management regulations, including building requirements to reduce future flood damage, such as requiring new and substantially improved or substantially damaged structures in special flood hazard areas to be elevated to or above base flood elevation levels. Additionally, FEMA provides premium reduction incentives for policyholders within communities that take measures to mitigate flood risk beyond NFIP minimum requirements through the agency's Community Rating System program. Under NFIP, the federal government assumes the liability for covered losses and sets rates and coverage limitations. While NFIP does not subsidize most policies, policyholders with certain types of insured properties pay subsidized premiums. For crop insurance, farmers participate voluntarily, but the federal government encourages participation by subsidizing their insurance premiums. USDA's RMA administers the federal crop insurance program, including issuing new insurance products and expanding existing products to new geographic regions. RMA administers the program in partnership with private insurance companies that share a percentage of the risk of loss or the opportunity for gain associated with each policy. Federal law prohibits crop insurance from covering losses due to a farmer's failure to follow good farming practices. Good farming practices are identified by agricultural experts in a given area and provide acceptable farming methods for crop insurance policyholders to use in producing yields consistent with historical production. 
Agricultural experts are individuals employed by USDA's Cooperative Extension System or the agricultural departments of universities, or other approved persons, whose research or occupation is related to the specific crop or practice for which expertise is sought. The federal government's disaster relief and flood and crop insurance programs create fiscal exposure to weather-related events and climate change. For the purposes of this report, fiscal exposures are responsibilities, programs, and activities that may either obligate the federal government to future spending or create the expectation for future spending. For example, the government's response to a weather-related event or series of events can strengthen expectations that the government will respond in the same way to similar events in the future. Public insurers also differ in the types of property they insure and in their time horizons. For example, while state insurers and NFIP insure permanent structures that are designed to last years or decades, federal crop insurance primarily covers agricultural commodities that farmers plant and harvest each year. This annual cycle can allow farmers to adapt their insured property to changes in weather-related risk more easily than state or NFIP policyholders can adapt their permanent structures to changing wind or flood risks. Uncertainty about the magnitude, timing, and extent of the effects of climate change in the future presents challenges to both public and private insurers. For example, NFIP will likely be affected by future sea-level rise. According to the May 2014 National Climate Assessment, although global mean sea level may increase anywhere from 8 inches to 7 feet in this century, the magnitude of the projected future sea-level rise varies considerably along the U.S. 
coastline due to a variety of factors, such as land subsidence and uplift. Officials from public insurers and some representatives from private industry stated that uncertainty regarding climate projections presents challenges to these insurers' ability to plan for the future. However, as stated in a 2010 National Research Council (NRC) report, even though uncertainties exist regarding the exact nature and magnitude of impacts, mobilizing now to increase the nation's resilience can be an insurance policy against future climate change risks. We define resilience as the ability to prepare and plan for, absorb, recover from, and more successfully adapt to actual or potential adverse events. Hazard mitigation—actions that reduce the long-term risks to life and property by lessening the impact of disasters—and climate change adaptation—the adjustments to natural or human systems in response to actual or expected climate change—promote resilience to extreme weather events, among other things. Recent executive orders have addressed the topics of vulnerabilities to extreme weather events and climate change-related risks. Executive Order 13632, which was signed in December 2012, established the Hurricane Sandy Rebuilding Task Force to, among other things, assess current vulnerabilities to extreme weather. The task force was also to identify opportunities for achieving rebuilding success and improving the affected region's resilience, consistent with the National Disaster Recovery Framework's commitment to support economic vitality, enhance public health and safety, protect and enhance natural and man-made infrastructure, and ensure appropriate accountability. 
Executive Order 13653, which was signed in November 2013, directs federal agencies to, consistent with their missions, (1) address barriers to the nation's resilience to climate change; (2) reform policies that may, perhaps unintentionally, increase the vulnerability of natural or built systems, economic sectors, natural resources, or communities to climate change; and (3) identify opportunities to support and encourage smarter, more climate-resilient investments. Growing federal and private sector exposure since our 2007 report on flood and crop insurance has increased insured and uninsured losses to date, and climate change and related increases in the frequency and severity of extreme weather events may further increase such losses in coming decades. Federal and private sector exposure to potential insured losses grew since 2007. Specifically, inflation-adjusted federal exposure to potential insured losses grew from $1.3 trillion to $1.4 trillion (8 percent) from 2007 through 2013. According to our analysis of the most recent data available, private sector exposure grew from an estimated $60.7 trillion to $66.5 trillion (10 percent) from 2007 through 2012, in 2014 dollars. Property insured under the NFIP comprised 91 percent of federal exposure to insured loss in 2013, but it grew the least (4 percent) from 2007 to 2013. Property insured under the federal crop insurance program accounted for 9 percent of total federal exposure to insured loss in 2013, but it grew the most (68 percent) since 2007, as shown in table 1. For historical context, in comparing the 7-year period shown in the table to the preceding 7-year period (2000 to 2006), NFIP's exposure to loss grew more slowly from 2007 to 2013, and the federal crop insurance program's exposure grew more quickly. From 2000 to 2006, inflation-adjusted NFIP exposure grew from $749.2 billion to $1.20 trillion (60 percent), compared with 4 percent from 2007 to 2013. 
In contrast, from 2000 to 2006, the federal crop insurance program's exposure grew from $45.5 billion to $56.9 billion (25 percent), compared with 68 percent from 2007 to 2013, adjusted for inflation. Also, federal crop insurance comprised 5 percent of total federal exposure in 2006, compared with 9 percent in 2013. Disaster relief appropriations—which could be considered a proxy for federal exposure to uninsured losses—also grew. Based on a 2013 analysis of disaster relief appropriations by the Congressional Research Service, the amount of inflation-adjusted disaster relief per fiscal year increased from a median of $6.2 billion for fiscal years 2000 to 2006, to a median of $9.1 billion for fiscal years 2007 to 2013 (46 percent). Although federal exposure spans all 50 states, it is concentrated in certain parts of the country, such as the Mississippi River Basin, California's Central Valley, and coastal areas, as shown in figure 1. Federal and private insured exposure to loss grew for a variety of reasons, including increases in the value of property insured and increases in the amount of coverage written. For example, according to data that the National Oceanic and Atmospheric Administration (NOAA) obtained from the U.S. Census Bureau, the U.S. coastal population grew by 39 percent from 1970 to 2010, and population density in coastal areas is six times greater than that of inland areas. Furthermore, a study by an insurance industry modeling firm found that the total value of property in U.S. East Coast and Gulf Coast areas grew by just under 4 percent each year since 2007, to over $10 trillion in 2012 in nominal dollars. Some studies and our prior work suggest that the current level of federal fiscal exposure to losses may become increasingly difficult to sustain in coming decades, given these socioeconomic factors and other budget constraints. 
The 20 scientific and industry studies we reviewed that examined the historical loss record generally found that exposure growth in hazard-prone areas has largely driven increased insured and uninsured losses to date. Specifically, most loss analyses we reviewed that found an upward trend identified socioeconomic factors, such as growth in population and the value of property, as the primary drivers of increasing losses to date. Most studies we reviewed did not find a statistically significant increase in such losses conclusively attributable to climate change. One assessment of loss studies noted that climate change cannot be ruled out as a factor because of limitations in data quality, different methods of correcting for socioeconomic trends, and changes in insurance coverage. Although most of the studies we reviewed did not find a clear climate change signal in historical losses, they noted that climate change may start to affect losses in the near future. Recent assessments of loss projections for certain weather events suggest that climate change may increase losses substantially by 2040, and potentially double annual losses by 2100, compounding existing loss trends. For example, our analysis of 20 scientific studies shows a wide range of projections that, on average, predict a 14 to 47 percent increase in inflation-adjusted U.S. hurricane-related losses—which significantly contribute to total U.S. losses—attributable to changes in the severity of storms by 2040, and a 54 to 110 percent increase in losses by 2100, as shown in table 2. With exposure projected to increase over the same period, annual insured and uninsured losses could be much higher by 2100 based on some of the studies we reviewed. Specifically, one peer-reviewed study projected that growth in property exposure in hazard-prone areas would more than double losses by 2100. When combined with the range of climate change projections, total losses could increase anywhere from about 50 to 340 percent by 2100. 
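The compounding of exposure growth and climate-driven hazard increases described above can be sketched with a simple multiplicative model. The function and figures below are illustrative assumptions for exposition only; they are not the methodology or values of the studies we reviewed:

```python
def combined_loss_increase(exposure_growth, hazard_growth):
    """Percent increase in total losses if exposure growth and
    climate-driven hazard growth compound multiplicatively
    (an illustrative assumption, not the studies' actual method)."""
    return ((1 + exposure_growth) * (1 + hazard_growth) - 1) * 100

# Illustrative only: exposure growth that doubles losses (+100%)
# combined with a +110% climate-driven increase in hazard losses
# yields a combined increase of roughly 320 percent.
print(round(combined_loss_increase(1.00, 1.10)))  # prints 320
```

The point of the sketch is simply that the two drivers multiply rather than add, which is why combined projections can far exceed either factor alone.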
For agriculture, climate disruptions to production have increased in the past 40 years, and the May 2014 National Climate Assessment projects such disruptions will increase over the next 25 years. According to this report, producers have many available strategies for adapting to the average temperature and precipitation changes projected for the next 25 years, including technological advancements, expansion of irrigated acreage, and regional shifts in crop production, among others. However, according to the report, by midcentury, when temperatures could increase by between 1.8°F and 5.4°F and precipitation extremes could further intensify, expected yields of major U.S. crops and farm profits could decline, even with the current pace of technological advances and geographic shifts. Public insurers have commissioned climate change studies, incorporated climate change adaptation into their planning, and taken other steps to better understand and prepare for climate change’s potential effects. However, inherent challenges of federal insurance programs, such as how federal insurers can address policyholders’ long-term risk given the short-term focus of insurance contracts, may impede NFIP and RMA’s ability to minimize long-term federal exposure to climate change. Public insurers have begun taking steps to understand climate change risks, such as sea-level rise, and have identified actions that could manage their exposure to climate change’s effects. For example, federal insurers commissioned climate change studies and modeling to better understand the long-term implications of climate change for their programs. Regarding flood insurance, a FEMA-commissioned June 2013 study found that sea-level rise, erosion, and other changes in coming decades will affect the NFIP by expanding areas prone to flooding and requiring premium increases to cover higher losses. The report also found that small annual rate increases could allow NFIP to adjust to gradual climate change effects. 
Based on this report's findings, FEMA initiated two pilot studies to analyze sea-level rise and its impacts in special flood hazard areas. FEMA also appointed members to a Technical Mapping Advisory Council to, among other things, develop recommendations to ensure FEMA uses the best available methodology to consider the impact of future development on flood risk, as required by the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act). According to FEMA officials, the council expects to release its flood mapping report in September 2015. Regarding crop insurance, RMA's 2009 study found that the crop insurance program has self-correcting features, which allow RMA to manage its exposure to gradual changes in climate. For example, farmers' annual production history determines their coverage and, if yields decrease under climate change, the program's exposure adjusts downward as well. RMA may also adjust the availability of coverage for crops to respond to geographic shifts in production as conditions become more or less favorable for certain crops. According to RMA officials, farmers' desire to maximize their profits and maintain their businesses will motivate them to alter their production practices as climate change effects occur. However, the agency's 2009 study also recommended that RMA better monitor weather and climate to understand potential rapid changes in future conditions and how to adapt agricultural and risk management practices to address climate change. RMA entered into a cooperative agreement with Oregon State University's Parameter-elevation Regressions on Independent Slopes Model (PRISM) Climate and Weather Group that will enhance RMA's monitoring of weather and climate. Representatives from PRISM and RMA said the two organizations have engaged in a long-term program to develop a 100-year historical weather time series that could help analyze climate change's effect on agriculture. 
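The self-correcting feature described above can be sketched in a few lines: because coverage is tied to a farmer's production history, declining yields automatically lower the insured guarantee and, with it, the program's exposure. The function below is a simplified illustration of that mechanism, with hypothetical yields and prices, not RMA's actual rating methodology:

```python
def insured_guarantee(yield_history, coverage_level, price_per_unit):
    """Per-acre guarantee based on the average of historical yields.
    If recent yields fall, the average -- and thus the program's
    exposure -- adjusts downward with them."""
    avg_yield = sum(yield_history) / len(yield_history)
    return avg_yield * coverage_level * price_per_unit

# Hypothetical corn yields (bushels/acre) at 75% coverage and $4.00/bushel:
stable = insured_guarantee([180, 175, 185, 180], 0.75, 4.00)
declining = insured_guarantee([180, 170, 160, 150], 0.75, 4.00)
print(stable, declining)  # prints 540.0 495.0
```

Under the declining-yield history, the guarantee (and hence the government's exposure on the policy) is lower without any change to the rating rules.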
Furthermore, in response to recent executive orders, FEMA and RMA have developed climate change adaptation plans, which outline planned agency actions to manage significant climate change-related risks to, and vulnerabilities in, agency operations and missions. For example, according to its climate change adaptation policy statement, FEMA will continue its study of climate change impacts on NFIP. Moreover, according to program officials, climate change efforts will incorporate the best climate science available into flood maps—which form the basis for identifying property owners' flood risk and providing guidance to communities on land-use decisions. According to its adaptation plan, RMA will update program parameters, such as the earliest and final planting dates. Changing planting dates, as needed, can help farmers avoid exposing crops to new changes in weather or climate. RMA officials told us that they also recently worked with USDA's Natural Resources Conservation Service to ensure consistency between crop insurance and conservation programs that protect soil and water resources. RMA also participates in the broader USDA climate change effort, such as USDA's Global Change Task Force, according to RMA and USDA officials. This task force coordinates USDA activities related to climate change and provides a venue for sharing information within the agency, according to an agency document. In addition, USDA's Agricultural Research Service (ARS) has identified several climate change adaptation strategies to promote long-term resilience to climate change effects such as increased soil erosion and water scarcity. Specifically, a 2013 ARS report found that resilient agricultural practices such as conservation tillage—where farmers leave some crop residue on fields—and water conservation will help minimize climate change costs and sustain agricultural production in a changing climate. 
According to USDA officials, the agency also plans to develop region-specific adaptation strategies through its newly established Regional Climate Hubs, which are to deliver science-based, practical information to farmers and to support decision making related to mitigation of, and adaptation to, climate change. Additionally, for a variety of reasons, federal insurers have adjusted their rate-setting calculations in ways that may better position them to respond to climate change. Although not specific to climate change, the Biggert-Waters Act requires NFIP to establish a reserve to help meet expected future obligations and to establish standards that ensure that its flood maps' flood risk determinations are adequate, both of which should help the agency collect funds that more closely match the risks it incurs or that are more likely to be sufficient to cover losses. FEMA officials said the agency has also increased the reserve amount in premiums to cover flood risk uncertainty and to better reflect the current flood risk associated with properties below base flood elevation. Recently enacted legislation required FEMA to phase out subsidies for some properties, such as nonprimary residences and business structures, through 25 percent annual premium increases until the full-risk rate is reached. In response to a 2011 rate study, RMA changed its rate calculations to more heavily weigh recent weather data, which RMA documents suggest could enable premiums to more quickly and fully reflect changes in the climate. By law, RMA must set crop insurance premiums at rates sufficient to cover projected claims. Due in part to weather variability, in some years RMA will likely collect more premiums than it pays in claims, and vice versa. RMA currently uses an average of the previous 20 years of yields to calculate its rates, rather than the previous method, in place prior to 2012, of assigning equal weight to all years back to the fixed base year of 1975. 
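The effect of RMA's change in weighting historical data can be sketched as follows. The yield series is hypothetical, and the functions are an illustration of the averaging change only, not RMA's actual rating calculations; the point is that a 20-year moving average registers a recent climate-driven decline faster than an equal-weight average back to a fixed 1975 base year:

```python
def moving_average(yields_by_year, current_year, window=20):
    """Average only the most recent `window` years (the post-2012 approach)."""
    recent = [v for yr, v in yields_by_year.items()
              if current_year - window <= yr < current_year]
    return sum(recent) / len(recent)

def fixed_base_average(yields_by_year, base_year=1975):
    """Equal-weight average of all years back to a fixed base year
    (the approach in place prior to 2012)."""
    vals = [v for yr, v in yields_by_year.items() if yr >= base_year]
    return sum(vals) / len(vals)

# Hypothetical yields: flat at 100 through 1999, then declining by 1 per year.
yields = {yr: 100 for yr in range(1975, 2000)}
yields.update({yr: 100 - (yr - 1999) for yr in range(2000, 2012)})

print(round(moving_average(yields, 2012), 1))  # 96.1 -- reflects the decline sooner
print(round(fixed_base_average(yields), 1))    # 97.9 -- diluted by decades of flat data
```

Because the moving average drops the oldest years as new ones arrive, a sustained shift in the climate feeds into the rate inputs within two decades instead of being averaged against the entire historical record.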
Moreover, FEMA, in conjunction with other federal agencies, has taken some recent steps to manage future risks related to climate change for disaster relief. For example, the Hurricane Sandy Rebuilding Task Force implemented a minimum flood risk reduction standard for Sandy-related disaster funding to account for future sea-level rise in response to Executive Order 13632. Under this standard, structures repaired or rebuilt must meet forward-looking standards, such as elevating the ground floor of a building 1 foot higher than existing FEMA standards. In addition, according to FEMA officials, a current interagency effort seeks to develop a Federal Flood Risk Reduction Standard that would apply to future disaster relief appropriations—although it is too early to know whether such a standard will incorporate future risk. Furthermore, according to a July 2014 White House statement, FEMA will issue new guidance that requires states to incorporate climate change into their hazard mitigation plans as a condition for receiving disaster relief. FEMA’s Hazard Mitigation Assistance programs and post-disaster grants currently do not require grantees to incorporate sea-level rise into their cost-benefit calculations for proposed projects, although they do allow it. Regarding other public insurance programs, at the state level, some state insurers in hazard-prone areas have transferred risk to the private sector to reduce their exposure to claims. Specifically, officials at two of the state insurers we interviewed told us they have sold policies (“depopulation”), bought reinsurance to transfer a portion of the risk in their portfolio to another insurer, and sold bonds that provide funding to the insurer should a catastrophic weather event occur (“catastrophe bonds”). These risk transfer methods reduce exposure to losses from extreme weather. Reinsurance and catastrophe bonds are short-term, market-based risk transfer methods, typically of 1 to 3 years in duration. 
We previously concluded that there are several strategies that could be considered at the federal level to allow the transfer of risk to the private sector—many of which would require statutory authority to implement, though FEMA currently has authority to implement a few. The Biggert-Waters Act also required FEMA to issue a report in 2013 that assessed the capacity of the private reinsurance, capital, and financial markets to assume a portion of the insurance risk of NFIP. Federal insurers face two main program challenges that may constrain their ability to manage their fiscal exposure and address future climate change risk. First, federal law encourages federal insurers, such as FEMA and RMA, to provide affordable insurance to policyholders through subsidized rates, which lessens the agencies' ability to collect sufficient premiums from policyholders to pay claims, increases the federal government's fiscal exposure, and may reduce policyholders' incentives to manage risk by giving them inaccurate signals about the level of risk. Specifically, federal insurers face a tension between providing affordable premiums through subsidies and managing financially self-sufficient programs by charging policyholders full-risk premiums. Additionally, while insurers, in general, communicate the risk of incurring losses to policyholders through their premium rates, by subsidizing some policies, federal insurers have not provided all policyholders with accurate price signals about their risk of incurring losses. As a result, some NFIP and federal crop insurance policyholders may perceive their risk of loss to be lower than it really is and may have less financial incentive to reduce this risk. For example, FEMA offers subsidized premium rates for policies covering certain structures, some of which are in high-risk areas. 
As a result of NFIP’s historical rate structure, the program has generated sufficient premiums to cover claims in years with average losses but has not had sufficient funds to cover claims in catastrophic loss years, and FEMA has an outstanding balance with the U.S. Treasury of $24 billion to pay for claims in these years. Although FEMA is phasing out most subsidies, they may remain in place for many years, and while they are in place, newly purchased residential properties may qualify for subsidized NFIP rates. As long as some NFIP policyholders only pay for a portion of their risk of losses due to subsidized premiums, they receive inaccurate price signals about their property’s full flood risk—regardless of what other information FEMA may provide. Consequently, policyholders who receive subsidized rates may have limited financial incentive to take steps, such as floodproofing their homes above base flood elevation, to reduce their risk. For federal crop insurance, although RMA is required to collect sufficient premiums to cover projected claims, the premiums are subsidized by the federal government. For example, we previously found that the government paid about 62 percent of premium costs for the program in 2011. The government continued to pay 62 percent of premium costs in 2013, which totaled about $7.3 billion. Also, in August 2014, we found that the costs of federal crop insurance have grown, primarily due to an increase in the value of premium subsidies. According to an April 2014 Congressional Budget Office report, crop insurance program costs are expected to average $8.9 billion annually, for fiscal years 2014 through 2023. Similar to flood insurance, federal crop insurance policyholders receive inaccurate price signals about their potential risk of loss when they receive such premium subsidies. 
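As a back-of-the-envelope illustration of the premium-subsidy figures above (the arithmetic is ours, not a calculation from the report's underlying data): a 62 percent subsidy share and roughly $7.3 billion in subsidies imply total premiums of about $11.8 billion, of which farmers paid about $4.5 billion:

```python
subsidy_share = 0.62     # government-paid share of crop insurance premiums (2013)
subsidy_total_bn = 7.3   # subsidy cost in billions of dollars (2013)

# Implied total premium and the farmer-paid remainder.
total_premium_bn = subsidy_total_bn / subsidy_share
farmer_share_bn = total_premium_bn - subsidy_total_bn
print(round(total_premium_bn, 1), round(farmer_share_bn, 1))  # prints 11.8 4.5
```

The sketch makes the price-signal point concrete: a policyholder facing $4.5 billion of an $11.8 billion premium pool bears well under half the priced risk.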
Although farmers are informed of the subsidy amount and have an incentive to maximize their annual yields, they do not bear the true cost of their risk of loss due to weather-related events, such as drought—which could affect their farming decisions. In prior work, we concluded that reducing subsidies and charging full-risk premiums to individual policyholders would decrease the federal government's fiscal exposure under the flood and crop insurance programs. Reducing the federal government's fiscal exposure to losses under federal insurance programs and sending more accurate signals to policyholders about their risk become even more important as the risks from climate change and related extreme weather events increase. Second, given the short-term nature of insurance, public insurers face a challenge in encouraging their policyholders to reduce their long-term exposure to climate change risks. Property insurance contracts typically estimate and communicate risk of property losses for the 1-year term of a policy. However, climate change effects on insured property, such as buildings or fields in production, may span decades. Because FEMA and RMA only provide policyholders with price signals for expected losses in the upcoming year, these policyholders may not be encouraged to reduce long-term risks for their property. Similarly, in agriculture, the tension between the short and long term extends beyond insurance into farming operations. According to RMA officials, farmers make annual decisions on the starting times and types of crops they plant and buy seed for the following year. They focus more heavily on this short-term time horizon, in part because seed and farming technology constantly change. 
Furthermore, building to community standards that are identical to existing NFIP standards—which are based on near-term flood risk—may unintentionally increase policyholders' long-term vulnerability to climate change as sea-level rise or erosion increases properties' flood risk, which does not reflect the direction contained in Executive Order 13653. This is because, in the short term, policyholders may be less likely to consider taking actions to protect their homes or localities against future risk and may continue to build in high-risk areas. According to FEMA guidance, the NFIP standards are periodically revised to incorporate new regulations or clarify old ones. Communities in turn must then update their own regulations to maintain consistency with the NFIP standards. Without incorporating forward-looking minimum standards into NFIP's requirements, similar to the minimum standard applied by the Hurricane Sandy Rebuilding Task Force, NFIP policyholders and localities may continue to build and rebuild structures to current community standards that may not reflect the changing weather-related risks faced over structures' designed life spans—thereby exacerbating the federal government's financial exposure to climate change. Regarding the federal crop insurance program, federal law prohibits crop insurance from covering losses due to farmers' failure to follow good farming practices. RMA's good farming practices provide acceptable farming methods for crop insurance policyholders to use in producing yields consistent with historical production. However, these practices are focused on maintaining historic crop yields over the term of the annual insurance contract, and some of these practices may unintentionally increase the vulnerability of agriculture to climate change, contrary to Executive Order 13653's directive for agencies to manage vulnerabilities to climate change. 
For example, certain practices, such as conventional tillage and traditional irrigation methods, may maintain historic crop yields in the short term, but they may inadvertently reduce agriculture's long-term resilience through increased erosion, depleted soil quality, and inefficient water use. RMA officials said that crop insurance adjusts to farmers' best management practices and should follow their adjustments rather than dictate their behavior. However, a variety of factors influence farmers' production practices and, in some cases, farmers may not adopt resilient practices unless these practices also maximize farmers' short-term net benefits, consistent with crop insurance's focus on maintaining historic production. By not encouraging agricultural experts to recommend or incorporate resilient agricultural practices into their expert guidance for growers' good farming practices, RMA is likely missing an opportunity to decrease existing and future fiscal exposures to climate change. Consequently, crop insurance may continue to cover losses resulting from practices that increase vulnerability to climate change. Many private property and casualty insurers and reinsurers have taken some steps since our 2007 report on flood and crop insurance to understand and report on their risks associated with climate change, including participating in industry climate change surveys and issuing reports that identify and assess climate change risks and trends in weather-related losses. While selected insurers we interviewed said they manage climate change risks through their underwriting practices, we found that the industry faces challenges in preparing for long-term climate change, such as short-term insurance contracts and catastrophe modeling limitations. Since our 2007 report on flood and crop insurance, industry representatives, including insurers, reinsurers, and brokerage firms, have reported on climate change risks they face. 
For example, many insurers have reported on how they incorporate climate change into their risk management practices through an annual industry survey adopted by the National Association of Insurance Commissioners (NAIC) in 2010. According to an analysis by the California Department of Insurance, which administers the survey, 1,067 insurers participated in the climate risk disclosure survey in 2013, for reporting year 2012. The California Department of Insurance found that 74 percent of insurers that participated in the 2012 survey indicated that they have a process for identifying and assessing climate change-related risks, and 65 percent of insurers indicated that they have encouraged policyholders to reduce the losses caused by climate change-influenced events. According to several insurers and industry representatives we interviewed, insurers generally manage their exposure to weather-related losses associated with climate change through their underwriting practices, which include charging risk-based premiums, managing coverage options, and sharing exposure to losses by purchasing reinsurance. In addition to insurers’ participation in the climate risk disclosure survey, other insurance industry representatives have identified and assessed risks associated with climate change. For example, representatives from three of four reinsurers we interviewed said that they have issued reports on climate change risks or weather-related losses. In addition, one of two reinsurance brokers we interviewed has reported on climate change risks and scientific assessments. The other reinsurance broker we interviewed has issued monthly and annual reports on natural catastrophe losses worldwide based on publicly available industry and climate data that cover topics associated with climate change, including trends in hurricane frequency, global temperatures, and weather-related losses. Private insurers face several challenges to prepare for the long-term effects of climate change. 
One key challenge is similar to that identified by public insurers: the short-term nature of insurance contracts. Insurers generally write property coverage for 1-year terms. While short-term contracts allow insurers flexibility to manage their exposure to losses through (1) changes in coverage or pricing or (2) not renewing a particular policy, the annual period of each contract makes it challenging for insurers to incorporate long-term climate change projections into their risk management practices. According to several industry representatives we interviewed, insurers and reinsurers use advanced computer modeling, known as catastrophe models, to help estimate risks and price insurance policies. While catastrophe modeling helps insurers and reinsurers estimate potential short-term weather-related losses, they said that incorporating the effects of climate change into catastrophe models remains a challenge. According to a representative of a catastrophe modeling firm we interviewed, catastrophe models that insurers use to price policies generally estimate short-term risks based on historical weather data and past losses and not on long-term climate change projections. Many industry representatives said that advances in catastrophe modeling have improved risk measurement over time. However, estimating weather-related risks still includes elements of uncertainty, and catastrophe modeling information is limited for some weather-related risks. We interviewed representatives from two catastrophe modeling firms and a reinsurance broker that have developed models to estimate risks associated with hurricanes, including wind and storm surge impacts, as well as a model for estimating the effects of weather on crops. 
Several firms are developing catastrophe models for inland flood risk, which will help estimate potential flood risks, although FEMA officials and several industry representatives said these models are not yet advanced enough to allow insurers to estimate or price flood risk for individual properties. Even with these challenges, several industry representatives said that, barring any additional regulatory restrictions, insurers and reinsurers are positioned to continue managing risks associated with climate change through their ability to set risk-based prices, write coverage, or manage exposure to losses. However, two reports by an industry group and academics have found that past catastrophic events have caused the private property casualty insurance market to contract following a major disaster, placing greater pressure on public insurers to provide coverage. For example, according to a 2012 Insurance Information Institute report, some insurers became insolvent or stopped writing coverage in certain areas following Hurricane Andrew in 1992 and, since then, Florida’s state insurer has grown from a market of last resort to the state’s largest insurer. While some industry representatives we interviewed said insurers and reinsurers have successfully managed weather-related risks through underwriting, some said additional incentives are needed to help the private sector, government programs, and individuals manage their exposure to risks associated with climate change. For example, several industry representatives said that South Carolina’s tax credit program for homeowners who fortify their homes to make them more resistant to hurricanes, catastrophic wind events, or flooding could encourage individuals to take steps to reduce their exposure to such risks. 
Some industry representatives we interviewed also suggested that federal, state, and local government adoption of building and land use practices that recognize potential climate change effects would help decrease both public and private exposure to insured and uninsured weather-related losses. For example, land use practices and zoning regulations that recognize potential climate change impacts could help reduce public and private exposure to climate change by limiting new construction in hazard-prone areas or relocating existing structures away from them. In addition, several insurers and reinsurers we interviewed and other industry representatives have suggested that incentives are needed to encourage more state or local governments to adopt resilient building standards to help mitigate weather-related losses. The Hurricane Sandy Rebuilding Task Force reported in August 2013 that using disaster-resistant building codes is the most effective method to ensure that new and rebuilt structures are designed and constructed to a more resilient standard. The task force recommended that states and local governments adopt the most current building codes to ensure that buildings and other structures incorporate the latest science, advances in technology, and lessons learned. A representative from one industry group said updated and more resilient building codes, as well as improved enforcement measures, would help reduce exposure to weather-related risks associated with climate change, including hurricanes, floods, wildfires, hail, and wind storms. The industry group reported in December 2011 that while several hurricane-prone states, including Florida, Virginia, and New Jersey, have adopted more resilient building codes, opportunities exist for others to adopt stronger standards and better enforcement measures. 
To examine these and other issues, in November 2013, the President established a Task Force on Climate Preparedness and Resilience composed of state, local, and tribal leaders to advise the President and an interagency council on how the federal government can support state, local, and tribal preparedness for and resilience to climate change, among other things. The task force’s recommendations, expected in the fall of 2014 according to a White House press release, will address removing barriers to resilient investments by, for example, improving data and tools available to state and local decision makers, reforming existing policies and federal funding programs, and identifying opportunities to support more climate-resilient investments by states, local communities, and tribes. FEMA and RMA have commissioned climate change studies, incorporated climate change adaptation into their planning, and taken other steps to address aspects of their federal flood and crop insurance programs that create fiscal exposures associated with climate change and extreme weather. However, the agencies continue to face fundamental challenges: their programs send inaccurate price signals to policyholders about their potential risk of loss, increase federal fiscal exposure, and may unintentionally increase policyholders’ vulnerability to climate change risks. We have previously concluded, among other things, that reducing subsidies and charging full-risk premiums to individual policyholders would decrease the federal government’s fiscal exposure under the flood and crop insurance programs. Executive Order 13653 directs federal agencies to, consistent with their missions, reform policies that may, perhaps unintentionally, increase the vulnerability of natural or built systems, economic sectors, natural resources, or communities to climate change. 
Regarding FEMA’s flood insurance program, the agency is phasing out most subsidies and is studying how to incorporate the projected effects of climate change, such as future sea-level rise and erosion, into its flood maps, but the mapping advisory council’s recommendations are not expected until September 2015. Until the agency implements these changes, some NFIP policyholders will continue to receive inaccurate signals about their current risk of loss, and all policyholders may lack signals about their future risk of loss over the designed life spans of their insured properties. Furthermore, NFIP standards may not fully reflect policyholders’ long-term vulnerability to climate change because these standards are based on current risk that does not reflect future sea-level rise, erosion, or other changes. Without incorporating forward-looking minimum standards into NFIP’s construction and rebuilding requirements, similar to the minimum standard adopted by the Hurricane Sandy Rebuilding Task Force, NFIP policyholders and communities may continue to build and rebuild structures to current NFIP standards that do not necessarily reflect the changing weather-related risks faced over structures’ designed life spans, which could exacerbate federal fiscal exposure amid already strained federal resources. In addition, a variety of agricultural practices are available to farmers that would improve their long-term resilience to climate change, such as practices that promote long-term water conservation and soil conservation. However, federal crop insurance policyholders may receive inaccurate price signals about their current risks because of subsidized premiums and, because of the short-term nature of annual insurance contracts, they may not receive signals that reflect the long-term implications of their short-term farming practice decisions. 
Additionally, federal law prohibits crop insurance from covering losses due to a farmer’s failure to follow good farming practices, although some of these practices may increase the vulnerability of agriculture to climate change, which may not reflect the direction contained in Executive Order 13653. Without working with agricultural experts to recommend or incorporate resilient agricultural practices into their expert guidance for growers’ good farming practices, RMA is likely missing an opportunity to decrease existing and future fiscal exposures to climate change. Consequently, crop insurance may continue to cover losses resulting from practices that increase vulnerability to climate change. We are making two recommendations in this report. To promote forward-looking construction and rebuilding efforts while FEMA phases out most subsidies, we recommend that the Secretary of Homeland Security direct FEMA to consider amending NFIP minimum standards for floodplain management to incorporate, as appropriate, forward-looking standards, similar to the minimum standard adopted by the Hurricane Sandy Rebuilding Task Force. To promote greater resilience to climate change effects in U.S. agriculture, we recommend that the Secretary of Agriculture direct RMA to consider working with agricultural experts to recommend or incorporate resilient agricultural practices into their expert guidance for growers, so that good farming practices take into account long-term agricultural resilience to climate change. We provided a draft of this report to USDA, Commerce, DHS, and Treasury for review and comment. USDA provided written comments, which are reproduced in appendix II; Commerce provided technical comments, which we incorporated as appropriate; DHS provided written comments, which are reproduced in appendix III; and the Department of the Treasury did not provide comments. 
USDA did not specify its agreement or disagreement with our recommendation, and DHS agreed with our recommendation. In its written comments, USDA referenced our finding that RMA’s good farming practices focus on maintaining historic crop yields over the term of the annual insurance contract and that some of these practices may unintentionally increase the vulnerability of agriculture to climate change, contrary to Executive Order 13653’s directive for agencies to manage vulnerabilities to climate change. USDA stated that RMA does not direct producers to take or carry out certain agronomic practices but rather relies on guidance from agricultural experts in the area. USDA also stated that, to the extent that agricultural experts in an area recommend or incorporate resilient practice recommendations into their expert guidance for growers in that area, RMA would consider those practices in its good farming practice determinations for coverage of losses. While RMA may not direct producers to follow certain agronomic practices, it can provide incentives for farmers’ adoption of resilient practices by working with agricultural experts to recommend or incorporate resilient practices into their expert guidance for growers’ good farming practices, and therefore into eligibility determinations for claim payments. As we note in the report, RMA has an opportunity to potentially reduce agriculture’s long-term vulnerability to climate change by encouraging the adoption of resilient practices now. For that reason, we recommend that RMA consider working with agricultural experts to recommend or incorporate resilient agricultural practices into their expert guidance for growers so that good farming practices take into account long-term agricultural resilience to climate change. 
In addition, USDA referred to one of the program challenges we identified for federal insurers: federal law encourages them to provide affordable insurance to policyholders through subsidized rates, which may reduce policyholders’ incentives to manage risk by giving them inaccurate signals about their level of risk. USDA stated that, while the federal government does subsidize a significant share of a producer’s premium, every producer receives a notification from his or her insurance provider explaining how much premium was paid by the government and how much is to be paid by the producer. Therefore, USDA stated that all producers are made aware of the full risk. While notifying farmers of the subsidy amount provides useful information, farmers do not bear the true cost of their risk of loss. As a result, the market signal sent by federal insurers for the price of the policyholders’ risk is the amount the policyholders actually pay. We have modified the report and our recommendation to respond to USDA’s comments. In its written comments, DHS concurred with our findings that FEMA has taken action to better understand and prepare for climate change’s potential effects and that FEMA faces challenges that may limit its ability to minimize long-term federal exposure to climate change. DHS also concurred with our recommendation to consider amending NFIP minimum standards for floodplain management to incorporate, as appropriate, forward-looking standards, similar to the minimum standard adopted by the Hurricane Sandy Rebuilding Task Force. 
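To make the price-signal point concrete, the following is a minimal sketch using invented figures; the premium and subsidy rate below are hypothetical illustrations, not values from this report:

```python
# Hypothetical illustration of the price-signal point: both figures below
# are invented for illustration and are not values from this report.
full_risk_premium = 100.0  # actuarially fair premium for the policy's risk
subsidy_rate = 0.60        # hypothetical federal share of the premium

producer_premium = full_risk_premium * (1 - subsidy_rate)

# The producer is notified of the subsidy amount, but the price actually
# paid (the effective market signal) reflects only part of the risk.
print(producer_premium)                      # 40.0
print(producer_premium / full_risk_premium)  # 0.4
```

Under these hypothetical figures, the producer's out-of-pocket price reflects only 40 percent of the full-risk premium, which illustrates why notification of the subsidy amount alone does not make the price paid an accurate risk signal.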
In particular, DHS stated that FEMA has already taken action to consider amending NFIP minimum standards by (1) commissioning a 2006 study to assess the adequacy of the NFIP’s minimum standards, (2) participating in a 2004 forum with key stakeholders about amending the NFIP minimum standards, and (3) encouraging communities to participate in the NFIP’s Community Rating System, which offers discounted flood insurance rates in exchange for a community’s proactive efforts to reduce its flood risk. We do not believe that these actions meet the intent of our recommendation for several reasons. First, commissioning a study and participating in a policy forum that discussed several aspects of NFIP minimum standards are not sufficient evidence that FEMA officials considered adopting forward-looking standards. FEMA has not provided documentation of internal policy discussions or other actions taken in response to the study or forum’s findings. Additionally, the study commissioned by FEMA is 8 years old, and it therefore does not reflect scientists’ current understanding of sea-level rise and other climate change effects identified in more recent National Climate Assessments. Similarly, the discussion with stakeholders occurred 10 years ago, and it therefore does not reflect the current state of the program, stakeholders’ current understanding of climate-related risks, the adequacy of FEMA’s floodplain maps, and recent advances in mapping technology. Moreover, the Community Rating System is voluntary and, as of March 2014, 1,296 of the nearly 22,000 NFIP-participating communities are in the program. These communities represent about 67 percent of NFIP policyholders. Accordingly, amending the NFIP minimum standards could reach the over 20,000 nonparticipating communities. 
As the summary report from the 2004 forum notes, building in a margin of error, such as adding a foot or more to the calculated base flood elevation for flood hazard assessment, at the outset of the program could have avoided many of the program’s current problems regarding uncertainty, including the uncertainty of climate change. We continue to believe that FEMA should consider amending the NFIP minimum standards to incorporate, as appropriate, forward-looking standards, and we therefore do not consider the recommendation resolved and closed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Homeland Security, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To examine how federal and private sector exposure to losses has changed since our 2007 report on flood and crop insurance and how climate change has affected or may affect insured and uninsured losses, we analyzed federal and private sector exposure data from 2007 through 2013. To understand longer-term trends in exposure, we also analyzed federal exposure data from 2000, and we reviewed scientific studies and other available literature on climate change. Specifically, we collected agency data on the total value of property insured under the National Flood Insurance Program (NFIP) and the federal crop insurance program. 
For the private sector, we collected estimates of the total value insured by property-casualty insurers, excluding auto, as determined by two industry catastrophe modeling firms, AIR Worldwide and Risk Management Solutions. We assessed the reliability of the agency and industry data we collected and determined that they were sufficiently reliable for describing the total value of property insured. We also conducted a literature review to identify pertinent studies on how climate change has affected or may affect insured and uninsured losses. Specifically, we searched for scholarly articles, industry articles and reports, think-tank reports, conference reports, and government publications published from 2007 onward. Through the literature search, we identified a number of studies that discussed climate change’s potential effects on insured and uninsured losses and stakeholder perspectives on climate change risks. For summarizing the effect of climate change on insured and uninsured losses, we limited our review to scientific, empirical studies that evaluated the historical loss record or projected future losses. Each of these studies used various techniques to discern the respective influences on the loss record of changes in exposure (e.g., wealth, population, insurance penetration) versus changes in weather patterns. Based on these criteria, we identified a total of 64 studies that were relevant and applicable to our report, 20 of which directly addressed the issue of climate change’s effect on losses. We reviewed the methodologies of these studies to ensure that they were sound and determined that they were sufficiently reliable for describing the potential effect of climate change on insured and uninsured losses. To determine how public insurers are preparing for climate change, we reviewed agency documents related to climate change. 
We reviewed the Department of Homeland Security’s (DHS) 2012 Climate Change Adaptation Roadmap, the Federal Emergency Management Agency’s (FEMA) 2011 Climate Adaptation Policy Statement, and a 2013 AECOM climate change study that FEMA commissioned. We also reviewed the Risk Management Agency’s (RMA) 2012 Climate Adaptation Plan and 2011-2015 Strategic Plan, RMA’s cooperative agreement with Oregon State University’s PRISM group, documents related to the agency’s 2011 rate change calculations, and a 2009 RTI climate change and modeling study that the agency commissioned. In addition, at the federal level, we interviewed officials from NFIP, RMA, the U.S. Department of Agriculture, the National Oceanic and Atmospheric Administration, the Federal Insurance Office, and the Council on Environmental Quality. We also reviewed the Biggert-Waters Flood Insurance Reform Act of 2012, the Homeowner Flood Insurance Affordability Act of 2014, and the Agricultural Act of 2014. At the state level, we interviewed officials from the Department of Insurance in three states and the wind insurers in two of those three states. We selected a nonprobability sample of states based on the following factors: whether the state was identified by an industry forecaster as among the most at risk of natural disaster, whether the state had an insurance pool administered by a governmental entity or an entity established pursuant to state law, and whether the state had experienced an extreme weather-related event in the past decade. Because we used a nonprobability sample to select states, our results are not generalizable to all 50 states; however, they can provide illustrative information. We also spoke with several academic experts on agriculture and modeling, identified from publications, other experts, and our prior work. 
Further, to determine how private insurers and reinsurers are preparing for climate change, we reviewed over 20 industry reports and other information from industry representatives, and we interviewed a nonprobability sample of representatives from four insurers and four reinsurers, as well as catastrophe modeling firms, reinsurance brokers, and industry groups representing more than 1,000 large and small property casualty insurers. We selected the four insurers to interview based on market share, experience with flood or crop insurance, and involvement in climate change issues. For insurers selected based on market share, we identified those with a large share of the U.S. property casualty market (about 50 percent cumulative share) and others with a smaller share (1 percent or less), based on 2012 industry data. Among these firms, we selected four insurers with diverse experience in the public and private insurance markets; the four firms we interviewed represented over 25 percent of the U.S. property and casualty insurance market and 15 percent of the private crop insurance market. We also interviewed representatives from a sample of four reinsurers, drawn from the 10 reinsurance firms with the greatest share of the U.S. reinsurance market. The reinsurers we interviewed represented over 30 percent of the U.S. reinsurance market, based on 2012 industry data. In addition, we interviewed 11 other industry participants, as well as academic researchers for context. We identified industry participants to interview through our prior work and relevant publications, as well as suggestions from other interviewees. Industry participants we interviewed included representatives from two catastrophe modeling firms, two reinsurance brokerage firms, and seven industry groups representing insurers, reinsurers, and others. 
Among the industry groups we interviewed, two groups represented more than 1,000 large and small property casualty insurers. Because we used a nonprobability sample to select interviewees, our results are not generalizable to industry as a whole but provide illustrative examples. We conducted this performance audit from November 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Michael Hix (Assistant Director), Charles Bausell, Alicia Puente Cackley, Heather Chartier, Kendall Childers, Melinda Cordero, John Delicath, Steven Elstein, Diantha Garms, Cindy Gilbert, Kathryn Godfrey, Susan Irving, Richard Johnson, Jessica Lemke, Armetha Liles, Susan E. Offutt, Jeanette Soares, Vasiliki Theodoropoulos, Frank Todisco, Lisa Van Arsdale, Patrick Ward, Eugene Wisnoski, and Franklyn Yao made important contributions to this report.
The May 2014 National Climate Assessment indicates that the frequency and/or severity of many weather and climate extremes may increase with climate change. Public and private property insurers can bear a large portion of the financial impact of such weather-related losses. In the public sector, federal insurance includes NFIP, managed by FEMA, and the federal crop insurance program, managed by RMA. GAO was asked to review climate change's effect on insurers. This report examines (1) how federal and private exposure to losses has changed since GAO's 2007 report on the subject, and what is known about how climate change may affect insured and uninsured losses; (2) how public insurers are preparing for climate change, and any challenges they face; and (3) how private insurers are preparing for climate change and any challenges they face. GAO reviewed 20 studies, examined federal and private sector data on exposure to losses from 2000 to 2013, reviewed agency documents, and interviewed agency officials and a nonprobability sample of eight insurers and reinsurers. Since GAO's 2007 report on flood and crop insurance, exposure growth in hazard-prone areas has increased losses, and climate change and related increases in extreme weather events may further increase such losses in coming decades. Scientific and industry studies GAO reviewed generally found that increasing growth and property values in hazard-prone areas have increased losses to date and that climate change may compound this effect. From 2007 through 2013, data from the Federal Emergency Management Agency (FEMA) and the Risk Management Agency (RMA) show that exposure to potential losses for insured property grew from $1.3 trillion to $1.4 trillion (8 percent). According to industry data, private sector exposure to such loss grew from $60.7 trillion to $66.5 trillion (10 percent) from 2007 through 2012. 
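As a rough arithmetic check (a sketch for the reader, not part of the underlying analysis), the growth percentages cited above follow directly from the reported exposure totals:

```python
# Arithmetic check of the exposure growth rates cited above, using the
# reported totals in trillions of dollars; results rounded to whole percents.
def growth_pct(start, end):
    """Percentage growth from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

federal_growth = growth_pct(1.3, 1.4)    # federal insured exposure, 2007-2013
private_growth = growth_pct(60.7, 66.5)  # private insured exposure, 2007-2012
print(federal_growth, private_growth)    # 8 10
```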
Federal exposure to uninsured loss also increased by 46 percent over this period, based on a 2013 analysis by the Congressional Research Service. According to the studies GAO reviewed, climate change may substantially increase losses by 2040 and may increase losses by about 50 to 100 percent by 2100. FEMA and RMA have taken some steps to better understand and prepare for climate change's potential effects under the National Flood Insurance Program (NFIP) and the federal crop insurance program by, for example, commissioning climate change studies. However, both agencies face challenges that may limit their ability to minimize long-term federal exposure to climate change. For example, because of the short-term nature of insurance (i.e., contracts typically estimate and communicate risk of property losses for the 1-year term of a policy), FEMA and RMA face a challenge in encouraging policyholders to reduce their long-term exposure to climate change risks. Specifically, flood insurance policyholders who build to NFIP standards, which are based on current flood risk and not on long-term risks, may unintentionally increase their vulnerability to climate change as sea levels rise. Also, while federal law prohibits crop insurance from covering losses due to a farmer's failure to follow good farming practices, such as various irrigation methods, some of these practices maintain short-term production but may inadvertently increase the vulnerability of agriculture to climate change through increased erosion and inefficient water use. A recent executive order directed federal agencies to reform policies that may, perhaps unintentionally, increase the vulnerability of economic sectors or communities to climate change. 
Without encouraging NFIP and crop insurance policyholders to adopt building and agricultural practices that reduce long-term risk, FEMA and RMA may send policyholders signals that unintentionally encourage their vulnerability to climate change, counter to the direction of the executive order, which could exacerbate federal exposure to losses. Many private insurers and reinsurers have taken steps since 2007 to better understand and prepare for climate change effects and related challenges, including participating in industry climate change surveys, and issuing reports that identify and assess climate change risks and trends in weather-related losses. According to industry officials, they can manage their exposure to climate change and related challenges through risk-based premiums, reinsurance, and other practices, although estimating weather-related risks still includes elements of uncertainty. GAO recommends that FEMA and RMA take additional steps to encourage flood and crop insurance policyholders to adopt building and agricultural practices that reduce long-term risk and federal exposure to losses. FEMA agreed with GAO's recommendation, and RMA neither agreed nor disagreed with GAO's recommendation.
I am pleased to be here today to discuss the implementation of the Paperwork Reduction Act of 1995 (PRA). As you requested, I will summarize our recent reports and testimonies on the PRA and provide our analysis of data on expired paperwork authorizations that were recently submitted to the Subcommittee by the Office of Management and Budget (OMB). In brief, our reports and testimonies all indicate that federal paperwork burden estimates have increased dramatically since the PRA was first enacted in 1980, although some of that increase is due to changes in measurement techniques. Agencies’ burden estimates have continued to increase since 1995 despite congressional expectations for reductions in federal paperwork burden. The increase in the governmentwide paperwork estimate appears largely attributable to continued increases in the Internal Revenue Service’s (IRS) estimates. However, IRS said these increases are due to increased economic activity and new statutory requirements—factors it does not control. In addition, we believe that OMB’s Office of Information and Regulatory Affairs (OIRA) has not fully satisfied all of the responsibilities that the PRA assigns to that Office. Regarding the data that OMB provided to the Subcommittee, we believe it indicates a troubling disregard by agencies for the requirement that they obtain OMB approval before collecting information from the public. Using OMB’s measure of the costs associated with federal paperwork, we estimate that agencies have imposed at least $3 billion in unauthorized burden in recent years. OMB can do more to encourage agencies that are not complying with the PRA to come into compliance, and we offer some options in that regard. Before discussing these issues in detail, it is important to recognize that some federal paperwork is necessary and can serve a useful purpose. Information collection is one way that agencies carry out their missions. 
For example, IRS needs to collect information from taxpayers and their employers to know the amount of taxes owed. Next spring, the Bureau of the Census will distribute census forms to millions of Americans that will be used to apportion congressional representation and for a myriad of other purposes. However, federal agencies have an obligation under the PRA to keep the paperwork burden they impose as low as possible. The original PRA of 1980 established OIRA within OMB to provide central agency leadership and oversight of governmentwide efforts to reduce unnecessary paperwork and improve the management of information resources. Under the act, OIRA has overall responsibility for determining whether agencies’ proposals for collecting information comply with the act. Agencies must receive OIRA approval for each information collection request before it is implemented. OIRA is also required to keep Congress “fully and currently informed” of the major activities under the act and must report to Congress on agencies’ progress toward reducing paperwork. To do so, OIRA develops an Information Collection Budget (ICB) by gathering data from executive branch agencies on the total number of “burden hours” OIRA approved for collections of information at the end of the fiscal year and agency estimates of the burden for the coming fiscal year. The PRA of 1995 defines the term “collection of information” as “obtaining, causing to be obtained, soliciting, or requiring the disclosure to third parties or the public, of facts or opinions by or for an agency, regardless of form or format.” The burden hour has been the principal unit of measure of paperwork burden for more than 50 years and has been accepted by agencies and the public because it is a clear, easy-to-understand concept. However, it is important to recognize that these estimates have limitations.
Estimating the amount of time it will take for an individual to collect and provide information, or how many individuals an information collection will affect, is not a simple matter. Therefore, the degree to which agency burden-hour estimates reflect real burden is unclear. Nevertheless, these are the best indicators of paperwork burden available, and we believe they can be useful as long as their limitations are kept in mind. Although referred to as a “budget,” the ICB does not limit the number of burden hours an agency is permitted to impose. As figure 1 shows, federal agencies’ annual paperwork burden-hour estimate rose from about 1.5 billion hours in 1980 to about 7.0 billion hours by the end of fiscal year 1995—just before the PRA of 1995 took effect. The figure also shows the degree to which IRS’ paperwork estimate drives the governmentwide estimate. (Figure 1 of this statement, Paperwork Reduction Act: Burden Increases and Unauthorized Information Collections, plots burden hours in billions.) As you can see, a large part of the increase in the governmentwide burden-hour estimate during this period occurred in 1989, when IRS changed the way it calculated its estimates. That reestimate increased the agency’s paperwork estimate by 3.4 billion hours and nearly tripled the governmentwide burden-hour estimate. However, it is important to remember that the amount of paperwork actually imposed on the public did not change, only IRS’ estimate of the burden that was already there. In every year since 1989, IRS has accounted for nearly 80 percent of the governmentwide burden estimate. The PRA of 1995 set governmentwide goals of reducing burden by 10 percent in each of fiscal years 1996 and 1997 and by 5 percent in each of the next 4 fiscal years, and required each agency to reduce its burden to the “maximum practicable opportunity.” Therefore, if federal agencies had been able to accomplish the reduction in burden contemplated by the PRA for the 3-year period ending on September 30, 1998, the 7.0 billion burden-hour estimate would have fallen 25 percent, or to less than 5.3 billion hours.
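The reduction contemplated for this 3-year period is simple arithmetic; a minimal Python sketch using the figures in this statement:

```python
# Governmentwide burden-hour estimate at the end of FY 1995 (figure 1).
baseline_hours = 7.0e9

# A 25-percent cumulative reduction over the 3 years ending September 30, 1998.
target_hours = baseline_hours * (1 - 0.25)

print(target_hours / 1e9)  # 5.25 billion, i.e., "less than 5.3 billion hours"
```

As the following sections explain, the actual decline over this period was less than one-half of 1 percent, nowhere near this target.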
However, as figure 2 shows, the anticipated 25-percent reduction in burden during this 3-year period did not happen. In fact, the recently developed ICB for fiscal year 1999 shows that the governmentwide burden-hour estimate actually declined by less than one-half of 1 percent during this period. As noted above, IRS accounts for nearly 80 percent of the governmentwide burden-hour estimate. Therefore, as illustrated in figure 1, changes in IRS’ estimate can have a highly significant—and even determinative—effect on the governmentwide total. As figure 3 shows, non-IRS departments and agencies estimated that, in the aggregate, they had reduced their paperwork burden by more than 23 percent between fiscal years 1995 and 1998—close to the 25-percent burden-reduction goal envisioned in the PRA. However, IRS’ burden-hour estimate increased by 6.9 percent during this period. That increase offset the estimated reductions in the other agencies and was largely responsible for the relatively minor decline in the governmentwide paperwork burden-hour estimate. Also, as I will discuss later, the estimate for the non-IRS agencies’ reductions was overstated. Looking forward, the ICB for fiscal year 1999 indicates that agencies expect their aggregate burden to increase by more than 4 percent between fiscal years 1998 and 2000. However, IRS will again lead the way, accounting for more than 85 percent of the governmentwide increase in estimated burden during this period. Nonetheless, IRS has taken actions to reduce burden. For example, the agency raised the threshold at which businesses had to maintain receipts to substantiate expenses for travel, entertainment, gifts, and listed property, thereby reducing burden by an estimated 12.5 million hours during fiscal year 1997; and it required those who file 250 or more of IRS Form 1042-S (used by withholding agents to report income and tax withheld from payees) to do so on magnetic media, thereby producing an estimated burden reduction of 21.1 million hours during fiscal year 1997.
As a result of these and other actions, IRS and other parts of the Department of the Treasury said they had eliminated more than 100 million hours of paperwork burden between fiscal years 1995 and 1998. However, despite these efforts, IRS’ overall burden estimate increased by about 400 million hours during this period. The ICBs that OIRA developed during this period indicated that this net increase was because of increased economic activity and new legislation that required IRS to establish new information collections. For example, the ICB for fiscal year 1999 said the Taxpayer Relief Act of 1997 (P.L. 105-34) significantly increased IRS’ paperwork burden, much of which was caused by new provisions for the calculation and reporting of taxes owed on capital gains. Overall, the ICB indicated that the Taxpayer Relief Act had increased burden by more than 92 million hours as of December 1998. IRS officials told us that these factors are outside of the agency’s control and have caused the recent increases in its burden-hour estimates. They also said the agency would not be able to reduce its paperwork burden if new statutes requiring information collections continue to be enacted and unless changes are made to the substantive requirements in the current tax code. Our July 1998 report examined the way in which OIRA has carried out some of its responsibilities under the PRA. Although OIRA pointed to a number of actions it had taken in each area of its responsibilities that we examined, those actions often appeared to fall short of the act’s requirements. For example, the PRA requires OIRA to set annual governmentwide burden-reduction goals and agency-specific goals that reduce burden to the “maximum practicable opportunity” in each agency. The act’s legislative history suggests a relationship between the agency goals and the governmentwide goals, and it is logical to assume that the agency-specific goals would be the means by which the governmentwide goals would be achieved.
However, OIRA says that the agency-specific goals may not total to the governmentwide goal because of the agencies’ statutory and program responsibilities. The PRA of 1995 also required OIRA to conduct pilot projects to reduce federal paperwork burden. However, as of last July, OIRA had not formally designated any such pilot projects. OIRA officials told us that other burden-reduction efforts are under way, and that pilot projects used to satisfy another statute meet the PRA’s requirements. However, in most cases, those other pilots predated the act and did not appear to have been initiated in response to the act’s requirements. The PRA also required OIRA to develop and maintain a governmentwide strategic plan for information resources management (IRM), which was defined in the act as the process of managing those resources to accomplish agency missions and improve agency performance. OIRA officials said that information contained in their annual reports to Congress under the PRA, the budget, and other documents satisfy this requirement. However, those documents do not appear to contain all of the elements that the PRA requires in a governmentwide IRM strategic plan. Similarly, the PRA requires OIRA to periodically review selected agencies’ IRM activities, and OIRA officials and staff said they do so through their reviews of agencies’ information collection requests, OMB’s budget formulation and execution process, and other means. However, none of the mechanisms that they mentioned would allow OIRA to address all of the elements that the PRA requires in the reviews. OIRA’s lack of action in some of these areas may be a function of its resource and staffing limitations. As we reported last July, OIRA has taken between 3,000 and 5,000 actions on agencies’ information collection requests in each year since the PRA of 1995 was enacted.
At the same time, the 20 to 25 OIRA staff members assigned to this task were responsible for reviewing the substance of about 500 significant rules each year and carrying out other responsibilities as well. Although the number of PRA-related actions that OIRA has taken each year has been relatively constant since 1980, the number of OIRA desk officers responsible for those reviews declined by more than 35 percent between 1989 and 1997. The second general issue you asked us to address involves data that OIRA recently sent to the Subcommittee concerning expired authorizations to collect information. The PRA prohibits an agency from conducting or sponsoring a collection of information unless (1) the agency has submitted the proposed collection and other documents to OIRA, (2) OIRA has approved the proposed collection, and (3) the agency displays an OMB control number on the collection. The act also requires agencies to establish a process to ensure that each information collection is in compliance with these clearance requirements. Finally, the PRA says no one can be penalized for failing to comply with a collection of information subject to the act if the collection does not display a valid OMB control number. OMB may not approve a collection of information for more than 3 years. In his March 3, 1999, letter to you, Chairman McIntosh, the Acting OIRA Administrator described the results of a review of 91 paperwork clearance dockets that OIRA staff conducted at your request. In one part of the letter, the Acting Administrator described the status of 52 information collections for which OIRA approval had expired. He indicated that 17 of these collections were still being carried out by the agencies after OIRA’s approval had expired, in violation of the PRA’s requirements.
A table enclosed with the Acting Administrator’s letter provided the details for each of these collections, including the date that OMB’s authorization expired and the annual burden-hour estimate for each collection. The table indicated that some of these information collections had continued to be administered for more than 2 years after OIRA’s approval had expired, and one had been out of compliance for more than 3 years. The table also indicated that at least one of these collections had been disapproved by OIRA, but the agency (the Department of Agriculture) went ahead with the information collection anyway. Using the information in the Acting Administrator’s letter, we prepared table 1, which shows, by agency and information collection, the total number of burden hours that have been imposed in violation of the PRA since OMB’s authorizations expired or were disapproved, along with the estimated costs in millions of dollars. The table also shows that, for all 17 collections, the agencies have continued to impose nearly 64 million burden hours of unauthorized paperwork even though OMB’s approval had expired. OMB guidance indicates that the cost of an information collection can be estimated by multiplying an hourly wage rate times the burden hours associated with the collection, and that the wage rate should be “loaded” to include overhead and fringe benefit costs. OMB also noted that the hourly cost of a technical employee might well exceed $40. In its 1997 report to Congress on the costs and benefits of federal regulations, OMB estimated the “opportunity cost” associated with filling out tax forms at $26.50 per hour. Therefore, multiplying IRS’ 5.3 billion burden-hour estimate times $26.50 yielded a $140 billion cost of tax compliance paperwork. As table 1 shows, multiplying the nearly 64 million burden hours of paperwork imposed in violation of the PRA times this estimate of opportunity cost yields a dollar value of nearly $1.7 billion of unauthorized paperwork burden from these 17 information collections.
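Every dollar figure in this section follows one rule: burden hours times OMB’s $26.50-per-hour “opportunity cost” estimate. A minimal Python sketch using the figures cited above:

```python
# OMB's 1997 estimate of the "opportunity cost" of paperwork, dollars per hour.
OPPORTUNITY_COST = 26.50

def paperwork_cost(burden_hours):
    """Dollar value of paperwork burden at OMB's hourly opportunity cost."""
    return burden_hours * OPPORTUNITY_COST

# IRS' 5.3 billion burden hours: roughly a $140 billion tax compliance cost.
print(paperwork_cost(5.3e9) / 1e9)  # 140.45

# The nearly 64 million unauthorized hours in table 1: nearly $1.7 billion.
print(paperwork_cost(64e6) / 1e9)   # 1.696
```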
The Acting Administrator’s March 3 letter also indicated that OMB’s authorizations for another 11 collections had expired and were later reinstated, but not before the collections were used to gather information in violation of the PRA’s requirements. The table enclosed in the letter provided the annual burden-hour estimate and the period that elapsed without OMB authorization. Although the authorizations for most of these collections had lapsed for about 6 months or less, one collection was unauthorized for nearly 2 years. Using this information, we prepared table 2, which shows, by agency and information collection, the total number of burden hours that were imposed in violation of the PRA between the date that OMB’s authorizations expired and the date the authorizations were reinstated. For all 11 collections, the agencies imposed more than 47 million hours of unauthorized burden. Using the same $26.50 per hour “opportunity cost” multiplier, these agencies imposed nearly $1.3 billion in paperwork burden in violation of the PRA. (Table 2 lists these 11 collections by title and estimated cost in millions of dollars; they include the CHAMPUS claim form, a Medicare/Medicaid claim form, premarket approval of medical devices, home health agency information for Medicare, good faith estimate and special information disclosures, employment reporting for low- and very-low-income housing, employment eligibility verification, repair and maintenance reporting, an eligibility verification report, a customer survey under Executive Order 12862, and an application for medical and funeral benefits. The number of burden hours between expiration and reapproval was calculated by multiplying the annual burden-hour requirement by an elapsed-time multiplier: the number of months between approval expiration and reapproval, divided by 12.) Taken together, the 17 collections in table 1 and the 11 collections in table 2 represent more than 110 million burden hours imposed in violation of the PRA. In dollar terms, that amounts to nearly $3 billion in unauthorized burden. However, this is clearly not the full extent of unauthorized information collections that have taken place.
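The elapsed-time proration used for table 2 can be sketched as a small helper; the first example below uses illustrative values (not a row from the table), while the second restates the table’s total:

```python
OPPORTUNITY_COST = 26.50  # OMB's per-hour "opportunity cost" estimate, dollars

def hours_while_unauthorized(annual_burden_hours, months_lapsed):
    """Burden imposed between expiration and reapproval: the annual
    burden-hour requirement times (months elapsed / 12)."""
    return annual_burden_hours * (months_lapsed / 12)

# Illustrative only: a collection with a 1.2-million-hour annual burden,
# unauthorized for 6 months, imposes 600,000 unauthorized hours.
hours = hours_while_unauthorized(1.2e6, 6)
print(hours)                          # 600000.0

# All 11 collections combined: more than 47 million hours, nearly $1.3 billion.
print(47e6 * OPPORTUNITY_COST / 1e9)  # 1.2455
```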
The ICB that OIRA recently developed identifies 800 violations of the PRA in fiscal year 1998. These violations included both other collections with expired OMB authorizations (some of which were subsequently reauthorized) and information collections that were never authorized in the first place. Some agencies (the Departments of Agriculture, Health and Human Services, and Veterans Affairs) had more than 100 PRA violations. As disconcerting as these violations are, even more troubling is that OIRA’s ICB reflects the hours associated with unauthorized information collections ongoing at the end of the fiscal year as burden reductions. However, the public has seen no real reduction in paperwork burden associated with these information collections; although the agencies are still requiring the paperwork, OMB is no longer counting the burden because its authorization had expired. As a result, OMB credits agencies for burden-reduction accomplishments that have not been achieved, when in reality the agencies are actually violating the PRA. When OMB’s approval for an information collection expires, OMB subtracts the estimated annual number of burden hours associated with the collection from the agency’s total. For example, when OMB’s approval for the Department of Agriculture’s (USDA) Noninsured Crop Disaster Assistance Program’s information collection expired on May 31, 1998, the estimated 8.1 million burden hours imposed by this collection each year was subtracted from OMB’s database. However, USDA continued to collect this information without OMB’s approval. Because this violation was ongoing as of September 30, 1998, the estimate of USDA’s paperwork burden at the end of fiscal year 1998 in the ICB for fiscal year 1999 was inappropriately recorded as being reduced by 8.1 million hours. The ICB’s list of violations also identifies other expired USDA collections, representing an additional 3 million hours of estimated burden.
Adding these 3 million hours and the 15 million hours from the five collections listed in the Acting Administrator’s letter to the 72 million hours reported in the ICB indicates that USDA’s burden estimate should have been about 90 million hours. Although the ICB indicated that USDA had reduced its estimated burden by 59 million hours (45 percent) by the end of fiscal year 1998, the actual reduction appears to have been about 41 million hours (31 percent). Similar adjustments appear to be needed in other agencies’ estimates as well. In his March 3 letter, the Acting Administrator said OIRA believed that compliance with the PRA is important, and that OIRA desk officers have worked closely with agency staff to stress the importance of full and timely compliance with the act. He also said that OIRA learns of agency violations from public comment and through direct monitoring of reporting from the agencies. The Acting Administrator said that OIRA’s database tracks and records OIRA activities in reviewing agency submissions for clearance under the PRA. However, he said the database is not designed or able to identify what he termed “bootleg” information collections that did not obtain OMB approval, or for which its approval had expired. Last November, Chairman McIntosh, you suggested that OIRA prepare and submit a monthly report listing expirations of OMB PRA approval. In response, the Acting Administrator said OIRA would add information about expired approvals to OMB’s Internet home page. As a result, he said potential respondents would be able to inform the collecting agency, OMB, and Congress of the need for the agency to either obtain reinstatement of OMB approval or discontinue the collection. Posting this information could also help OIRA and agency officials use the database to identify information collections whose authorizations are about to expire, and therefore perhaps prevent violations of the act. The PRA of 1995 requires that OIRA’s annual report to Congress include a list of all violations of the act.
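The USDA adjustment described above is straightforward addition: hours still being imposed belong back in the agency’s total. A Python sketch using the statement’s figures:

```python
# Figures from the statement, in hours.
reported_usda_fy1998 = 72e6   # USDA total reported in the ICB for FY 1998
letter_collections = 15e6     # five expired collections in the letter
other_expired = 3e6           # additional expired USDA collections

# Burden still being imposed should remain in the total.
adjusted_fy1998 = reported_usda_fy1998 + letter_collections + other_expired
print(adjusted_fy1998 / 1e6)  # 90.0 million hours

# The claimed 59-million-hour reduction implies a 131-million-hour baseline.
baseline = reported_usda_fy1998 + 59e6
claimed_pct = 59e6 / baseline * 100
actual_pct = (baseline - adjusted_fy1998) / baseline * 100
print(round(claimed_pct), round(actual_pct))  # 45 31
```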
OIRA reported 39 pages of violations in the ICB for fiscal year 1998, broken down into collections for which authorizations had expired and collections for which authorizations were never initially provided. The ICB for fiscal year 1999 contains 59 pages of these violations. However, OIRA officials and staff told us that they have no authority to do much more than publish the list of violations and inform the agencies directly that they are out of compliance with the act. We do not agree that OIRA is as powerless as this explanation would suggest. If an agency does not respond to an OIRA notice that one of its information collections is out of compliance with the PRA, the Acting Administrator could take any number of actions to encourage compliance, including any or all of the following: Publicly announce that the agency is out of compliance with the PRA in meetings of the Chief Information Officers Council and the President’s Management Council. Notify the “budget” side of OMB that the agency is collecting information in violation of the PRA and encourage the appropriate resource management office to use its influence to bring the agency into compliance. Notify the Vice President of the agency’s violation. (The Vice President is charged under Executive Order 12866 with coordinating the development and presentation of recommendations concerning regulatory policy, planning, and review.) Place a notice in the Federal Register notifying the affected public that they need not provide the agency with the information requested in the expired collection. OIRA could also notify agencies that the PRA requires them to establish a process to ensure that each information collection is in compliance with the act’s clearance requirements. Agencies that repeatedly collect information without OMB approval or after OMB approval has expired are clearly not complying with this requirement.
Although OIRA’s current workload is clearly substantial, we do not believe these kinds of actions would require significant additional resources. Primarily, the actions require a commitment to improve the operation of the current paperwork clearance process. This completes my prepared statement. I would be pleased to answer any questions.
Pursuant to a congressional request, GAO discussed the implementation of the Paperwork Reduction Act (PRA). GAO noted that: (1) GAO's reports and testimonies all indicate that federal paperwork burden estimates have increased dramatically since the PRA was first enacted in 1980, although some of that increase is due to changes in measurement techniques; (2) agencies' burden estimates have continued to increase since 1995 despite congressional expectations for reductions in federal paperwork burden; (3) the increase in the governmentwide paperwork estimate appears largely attributable to continued increases in the Internal Revenue Service's (IRS) estimates; (4) however, IRS said these increases are due to increased economic activity and new statutory requirements--factors it does not control; (5) in addition, GAO believes that the Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs has not fully satisfied all of the responsibilities that the PRA assigns to that office; (6) regarding the data that OMB provided to the House Committee on Government Reform, Subcommittee on National Economic Growth, Natural Resources and Regulatory Affairs, GAO believes it indicates a troubling disregard by agencies for the requirement that they obtain OMB approval before collecting information from the public; (7) using OMB's measure of the costs associated with federal paperwork, GAO estimates that agencies have imposed at least $3 billion in unauthorized burden in recent years; and (8) OMB can do more to encourage agencies that are not complying with the PRA to come into compliance, and GAO offers some options in that regard.
Unaccompanied personnel who are not assigned to government-owned housing, or are above certain pay grades, are authorized to receive the BAH, and the amount of the allowance is based on factors that include a servicemember’s pay grade, dependency status, and geographic location. Additionally, each service determines pay grades at which personnel are no longer assigned to government-owned housing. Junior unaccompanied personnel are generally required to live in government-owned unaccompanied housing on their installation, commonly referred to as barracks (Army and Navy), dormitories (Air Force), or bachelor enlisted quarters (Marine Corps), and may be eligible for the housing allowance only if on-installation, government-owned housing is not available. In table 1, we list the pay-grade thresholds each military service has established for junior unaccompanied personnel permanently assigned to installations in the United States and required to live in government-owned housing. In 1995, DOD adopted a new construction standard that called for more space and increased privacy in new government-owned housing for servicemembers permanently assigned to an installation. The new standard, which was modified in 2007, provided each junior unaccompanied servicemember with a private sleeping room and a kitchenette and bath shared by one other member. DOD justified the adoption of the new standard primarily as an investment in quality of life aimed at improving military readiness and retention. All the military services except the Marine Corps accepted the new standard, and developed various initiatives to implement it, as discussed below. The Marine Corps believed that the new standard did not allow for the unit cohesion and team building needed to reinforce Marine Corps values and develop a stronger bond among junior Marines.
Therefore, the Marine Corps obtained a permanent waiver from the Secretary of the Navy to use a different design standard—one sleeping room and bath shared by two junior Marines. According to a February 2013 DOD report to Congress on government-owned housing for unaccompanied personnel, from fiscal years 1996 through 2012, DOD spent over $20 billion of military construction funds to build and modernize on-installation housing for unaccompanied personnel. Army: Between fiscal years 1996 and 2012, the Army spent over $12 billion of military construction funds on its barracks modernization program to modernize housing for all Army unaccompanied personnel permanently assigned to an installation. The renovated facilities meet the current DOD standard configuration, and each module includes two bedrooms, one bathroom, a cooking area, and appliances. The housing complex also includes laundry facilities. Navy: The Navy spent about $2.5 billion of military construction funds between fiscal years 1996 and 2012 on improving the condition of its housing for unaccompanied personnel. A key component of the Navy’s modernization program for unaccompanied housing is the Homeport Ashore program, which was created to improve the quality of life among ship-based junior sailors by moving them off ships and into unaccompanied housing on shore while their ships were docked in their homeport. The Navy expects to complete this initiative by fiscal year 2016 utilizing both privatization and military construction authorities. However, the BAH statute (37 U.S.C. § 403(f)) prohibits E-1 to E-3s without dependents on sea duty from receiving the BAH, and privatized housing projects are not generally feasible unless military members are receiving a housing allowance. Congress, in the Bob Stump National Defense Authorization Act for Fiscal Year 2003, amended the housing privatization authorities by adding a new section (10 U.S.C.
§ 2881a) that authorized the Navy to carry out up to three pilot unaccompanied housing privatization projects in which junior enlisted members without dependents could be authorized higher rates of partial BAH to pay their rent. Per 37 U.S.C. § 403(n), partial BAH is a payment at a rate determined by the Secretary of Defense based on a specified historical rate (typically around $8 per month, as of 2011) paid to members not authorized to receive BAH, such as those assigned to live aboard ships or in government quarters. The 10 U.S.C. § 2881a authority expired on September 30, 2009, and the Navy executed two of the three authorized projects prior to the expiration of the authority. Air Force: The Air Force spent almost $3 billion of military construction funds from fiscal years 1996 to 2012 on modernizing its dormitories for unaccompanied personnel. Air Force housing officials told us that the service has adequate housing for all its airmen. The Air Force also implemented a policy in 1996 whereby each unaccompanied airman permanently assigned to an installation is assigned to a private bedroom. In 2006, the Air Force started requiring that its dorms be built or renovated according to a four-bedroom module design, called Dorms-4-Airmen, specifically for unaccompanied personnel in the pay grades from E-1 to E-3 and E-4 with less than 3 years of service. The design was based on Air Force criteria, detailed analysis of square-footage requirements and constraints, and prototype development. It was designed to achieve the goal of providing privacy while boosting social interaction. Marine Corps: The Commandant of the Marine Corps approved the Bachelor Enlisted Quarters campaign plan in 2006. The goals of the plan were to eliminate existing space deficiencies, demolish inadequate housing, and achieve the new standard of one sleeping room and bath shared by two junior Marines by fiscal year 2014.
From fiscal years 1996 to 2012, the Marine Corps spent about $3.5 billion of military construction funds to replace and renovate its housing for unaccompanied personnel. In June 1997, DOD and the Office of Management and Budget (OMB) agreed to a set of guidelines that would be used as a frame of reference for scoring privatization projects. The implications of scoring depend on which MHPI authority will be used. For example, the guidelines state that if a project provides an occupancy guarantee, then funds for the project must be available and obligated “up front” at the time the government makes the commitment of resources. In other words, if a project provides an occupancy guarantee, then the net present value of the guarantee—the cumulative value of the rents to be paid for the housing over the entire contract term—must be obligated at the beginning of the project. According to Army and Navy officials, none of the privatized projects for housing unaccompanied personnel discussed in this report include an occupancy guarantee. From 1997 to 2011, the services conducted several analyses of the costs and suitability of privatization as a financing method for their housing needs for unaccompanied personnel. Using different methods, such as business-case and life-cycle cost analyses, and using different assumptions about how repairs and upkeep for housing would be funded, the services reached different conclusions about the potential for cost savings from using either privatization or the traditional government-funded military construction approach. The Army concluded that privatization is feasible but more costly in most cases, while the Navy found that privatization is feasible in certain locations. The Air Force and Marine Corps concluded that privatization was not desirable for housing their unaccompanied personnel. The Army conducted three sets of analyses to determine whether to privatize housing for unaccompanied personnel.
These analyses used different scenarios and data gathered from multiple locations. The Army documented the analytical processes used, and communicated its conclusions to service leadership. In 2004, the Army formed a task force to assess the feasibility and desirability of privatization of unaccompanied personnel housing. Task-force members conducted the study over 6 months, visiting six sites: Fort Detrick, Maryland; Fort Leonard Wood, Missouri; Fort Lewis, Washington; Fort Stewart, Georgia; Fort Hood, Texas; and the Presidio of Monterey, California. There appeared to be no consistent criteria applied for site selection in the task-force study in that the reasons for selection differed in each case. For example, Fort Lewis was selected in part because of command interest, and Fort Leonard Wood was selected because it is a training installation that represents the consolidation of training missions at a larger site. Study authors also considered 18 scenarios, 5 of which were Army-wide. These scenarios involved different assumptions about the number and pay grades of unaccompanied personnel housed on and off installations, as well as the amount of money spent by the Army to construct and sustain new facilities. The study concluded that privatization of housing for unaccompanied personnel was financially feasible at selected installations, such as Fort Stewart and Fort Hood, in part because a majority of senior enlisted personnel there were already receiving the BAH and living off the installation; however, the leadership task force responsible for the study could not reach consensus on the study’s findings. For example, the study authors suggested that soldiers should not be mandatorily assigned to privatized housing. OMB scoring rules require that mandatory assignment be treated as an occupancy guarantee, which would have the effect of committing the government to a large long-term expenditure.
However, other members of the task force questioned whether mandatory assignment might be necessary to support the building of cohesive units, which is fostered by working and living together as a team. In response to a 2009 congressional inquiry, the Army completed an additional analysis of privatization, preparing a report that focused on the privatization of housing for junior unaccompanied personnel. The analysis included a review of privatization’s effect on costs, soldiers’ quality of life, and the Army’s traditions and culture. The study was conducted over a 3-month period and included modeling scenarios at Fort Polk, Louisiana; Fort Irwin, California; and Fort Meade, Maryland, and one U.S.-wide extrapolation. The three locations were chosen because their barracks needed renovations and local commanders and private-sector developers supported privatization. The analysis concluded that privatization was feasible, but the cost to privatize barracks would be higher than what the Army was currently spending on barracks construction and sustainment. The Army also conducted a series of due diligence studies at Fort Benning, Georgia; Fort Irwin, California; Fort Knox, Kentucky; Fort Leonard Wood, Missouri; Fort Meade, Maryland; and Fort Polk, Louisiana, in April and May 2010. The purpose of these studies was, among other things, to assist the Army in determining the feasibility of implementing barracks privatization pilot projects. In July and August 2010, the results of the studies were condensed into business-case analyses to show the potential costs or savings the Army would experience at each of the six sites if barracks privatization projects were executed. According to the Army report, the bottom-line finding of the analyses was that such projects would result in a significant net cost to the Army if executed, because the Army was not funding all barracks requirements at 100 percent. 
The report further stated that the Army’s expected BAH payments would be greater than the actual barracks funding that was currently taking place. Fort Meade was the only exception among the six sites because less than 50 percent of the junior servicemembers there were Army, but the Army was funding all barracks for all the services. The report concluded that the Army’s expected BAH payments at Fort Meade would be less than the current Army Military Construction, Operation and Maintenance, and Sustainment, Restoration and Modernization funding. Like the Army, the Navy developed analyses that considered multiple scenarios. In 2009, the Navy conducted a business-case analysis using three scenarios and data collected from site visits at San Diego, California, and Norfolk, Virginia, which were the only Navy locations with privatized projects for unaccompanied housing. The service used both quantitative and qualitative data, drawing on the pro forma financial statements and requests for proposals from the San Diego and Norfolk privatization projects, military construction budget and BAH data from multiple years, and interviews with personnel across the Navy. The study compared three alternative scenarios with a baseline scenario. One of the scenarios involved privatization, another featured construction with military construction funds, and the third assumed the community provided the majority of the housing needs. Under the baseline scenario, the assumptions were that the Navy would own and operate all housing for unaccompanied personnel and would underfund building maintenance and support. Briefings to leadership documented the analytical process and summarized the results of the study. 
The Navy analysis concluded that privatizing housing for junior sailors, with residents receiving a higher partial rate of BAH (rather than the full BAH rate), would be more cost-effective than building new quarters using traditional military construction funding. In 2011, 2 years after the initial analysis, the Navy reviewed the issue of the privatization of housing for unaccompanied personnel again and reached similar conclusions. The Navy study found that privatized housing has lower operating costs than housing funded through annual appropriations requested through the military construction budgeting process and sustained at the required levels of operation and maintenance. However, the study noted that privatization of housing for unaccompanied personnel is viable only at select locations, such as where there is a stable population and a need to provide sailors with housing ashore when their ship is in its homeport. In such areas, enough population might exist to sustain the necessary level of occupancy in unaccompanied housing while sailors are at sea. The Air Force and Marine Corps analyses of whether to privatize unaccompanied personnel housing reviewed privatization at a few selected locations. The Air Force developed three analyses reviewing privatization over a 5-year period beginning in 1997. Air Force officials documented the analytical processes used through reports and memorandums and communicated the conclusions to service leadership in briefings. The first effort, the Dormitory Privatization Feasibility Study, lasted for 5 months and included site visits to two bases where data were collected for a feasibility analysis. The Air Force selected the two locations—Dover Air Force Base, Delaware; and Tinker Air Force Base, Oklahoma—from eight candidate bases nominated by the major commands, in part because both bases had housing shortages. 
Tinker had the largest housing shortage of the eight candidate bases with 59 percent of the total demand for unaccompanied housing unmet, compared with 12 percent at Dover, and both had rooms that would require future renovation or replacement. Based on post-site-visit financial analyses, the study authors found that privatization would be less expensive than traditional military construction at Tinker but not at Dover. A 51-year life-cycle cost comparison conducted in 1997, provided to us by Air Force officials, showed the cost of privatization at Tinker to be $163.7 million, compared with the military construction cost of $205.7 million. For Dover, the analysis showed a cost of $110.5 million for a traditional military construction approach compared with $132.5 million for privatization. The study authors concluded that privatization was more suitable for installations with a slow local economy, high installation and local support for privatization, degraded existing facilities, and a large unaccompanied housing shortage—conditions that existed at Tinker. Further, the study concluded that since privatization of housing for unaccompanied personnel was suitable only for certain locations, it could be used only to augment traditional military construction funding, not to replace it. Later, in 1997, the Air Force organized an exercise to discuss whether to use privatization as a tool to construct dormitories. The team conducting the exercise was composed of more than a dozen Air Force headquarters housing and installation officials. The team discussed the results of the Dormitory Privatization Feasibility Study, as well as other factors such as the effects of utilities, leasing, and mandatory assignment of personnel to privatized housing on OMB scoring, and leadership control over housing residents’ activities. 
The team recommended that the Air Force not pursue privatization to construct dormitories, primarily because the team found that privatization was not a cost-effective alternative to using military construction funding for building dormitories. In 2002, 5 years later, another team composed of new members from all levels of the Air Force met to establish a baseline for an Air Force dormitory privatization program. This team also identified a number of issues that would make privatization projects unfeasible unless they were resolved, such as unit integrity, the scale of the necessary government commitment of funds, enforcing discipline among tenants, and conducting inspections in a building that was not solely government-owned. In an April 2000 memorandum, the Air Force Chief of Staff argued against privatizing unaccompanied personnel housing. The memorandum indicated that residing in on-base dormitories ensures that junior enlisted personnel acclimate to the Air Force, build esprit de corps with members of their unit, and have access to base services such as medical, fitness, recreation, commissary, and exchange facilities. Ultimately, according to Air Force officials, the Air Force decided that military construction would meet its housing needs and decided against using privatization. In 2008, the Marine Corps completed a feasibility analysis to decide whether to privatize housing for unaccompanied personnel at a single location—Camp Pendleton, California—as it lacked sufficient high-quality housing for unaccompanied personnel. The service documented this analysis in a briefing submitted to Marine Corps leadership and a memorandum prepared the following year. The feasibility analysis included an examination of the cash contributions required from the Navy, a participation test for the 336-bed project, and a life-cycle cost analysis. 
The feasibility analysis concluded that privatization of housing for unaccompanied personnel would be 55 percent more expensive than building new quarters using military construction funds. A 2009 Marine Corps summary on the subject of bachelor housing privatization noted that Marines are assigned to barracks with others from their unit, which promotes unit integrity and unit cohesion. However, the direct or mandatory assignment of servicemembers to privatized housing could be viewed as providing an occupancy guarantee to the developer, which under the OMB guidelines would require that the full value of the guarantee be available and obligated “up front” at the time the government makes the commitment of resources. In interviews, Marine Corps officials stated that privatized housing is incompatible with Marine Corps culture because Marines do not deploy as individuals; they deploy as units. Moreover, E-1 to E-3 Marines, like E-1 to E-3 sailors on sea duty, are assigned to shared rooms. This configuration is an important element of the Marine Corps’ philosophy and goal of fostering team building, companionship, camaraderie, and unit cohesion, according to a 2010 report on unaccompanied personnel housing for junior enlisted members by the LMI company, which was commissioned by DOD to provide a comprehensive view of housing programs for unaccompanied personnel across the services. The Marine Corps conducted no additional analyses of privatization for unaccompanied personnel. Starting in 2008, the Marine Corps undertook a $2.8 billion military construction initiative to build new barracks over a 6-year period from fiscal year 2008 through fiscal year 2014. According to Marine Corps officials, the Marine Corps decided that military construction would meet its needs for housing and decided against using privatization. 
In addition to the three issues of OMB scoring, the life-cycle cost of government construction and operation of housing versus that of privatized construction and operation of housing, and unit integrity, the services’ analyses and our interviews with service officials identified three other factors that influenced the services’ decisions about whether to privatize housing for unaccompanied personnel:

BAH: Most junior unaccompanied personnel without dependents are not eligible to receive a housing allowance (and, in the case of junior shipboard sailors, are not entitled by law to receive a BAH). Without the assurance of a steady stream of income from the BAH, which junior unaccompanied personnel could use to pay rent for privatized housing, private-sector developers would likely be unwilling to participate in privatized housing projects, the Army’s 2005 Unaccompanied Personnel Housing Privatization Task Force Study concluded. In interviews and in some analyses, such as the Army’s task-force study report, the services expressed reluctance to assume any additional costs, particularly costs relating to personnel, since such obligations to pay future costs must typically be funded at the time the obligation is made. In the Army’s privatization task-force report, the Army’s resource-management officials noted that even just a few pilot privatization projects could lock the Army into a 50-year BAH bill that must be funded, because the leases for privatization projects generally run for 50 years.

The frequency or duration of unit deployments: With privatized family housing, the frequency of deployments of the servicemember generally does not affect the rent received because the servicemember’s family remains behind and maintains the leased property. However, unaccompanied personnel living in privatized housing who do not have a lease do not receive the BAH while they are deployed. Therefore, frequent or prolonged deployments can reduce the occupancy rates of privatized housing. Occupancy rates are a key indicator of a housing project’s financial viability.

The uncertainty about the future size of the force: According to a 2012 DOD budget-priorities document, the department plans to reduce the size of the active Army from a post-9/11 peak of about 570,000 in fiscal year 2010 to 490,000 by fiscal year 2017, and the active Marine Corps from a peak of about 202,000 in fiscal year 2010 to 182,000 by fiscal year 2017. None of the services’ analyses discussed the current uncertainty about the future size of the force, partly because most of them were written before the current force-structure reductions were announced. These reductions may eliminate current housing deficits and create a disincentive for private-sector developers to participate in privatization projects.

Between 1996 and 2013, the Army and the Navy implemented seven privatized unaccompanied personnel housing projects. As stated previously in this report, both the Army and Navy have also used military construction funding to upgrade and renovate their housing for unaccompanied personnel. The Air Force and the Marine Corps have not used the privatization authorities, and are instead using military construction funds to improve the quality of their unaccompanied personnel housing. Air Force housing officials told us that the Air Force unaccompanied personnel housing inventory generally meets current housing needs. According to Marine Corps officials, the Marine Corps intends to eliminate existing housing deficiencies by demolishing inadequate unaccompanied personnel housing, and using military construction funds to replace or renovate such housing by the end of fiscal year 2014. According to Office of the Secretary of Defense and military-service housing officials, none of the services have plans to pursue any future privatized housing projects for unaccompanied personnel. 
The Army has projects to privatize housing for unaccompanied personnel at five locations. Four of these projects are at Fort Irwin, California; Fort Drum, New York; Fort Bragg, North Carolina; and Fort Stewart, Georgia. At each of these locations, sufficient adequate and affordable housing was not available off the installation. These projects were intended to house unaccompanied personnel at pay grades E-6/Staff Sergeant and above, who are eligible to receive the BAH. In 2012, the Army decided to implement a fifth privatization project at Fort Meade, Maryland, for unaccompanied personnel E-5/Sergeant and below. These junior unaccompanied personnel currently receive the BAH and are living off the installation because Fort Meade does not have enough housing for unaccompanied personnel on-site. The initial development cost for the Army projects was about $219 million, all of which was incurred by the privatized housing project companies. These costs generally included the costs of construction and project financing. The Army’s investment in the projects was in the form of land leased to the privatized housing project companies to serve as the sites for the projects. Table 2 summarizes the status of the Army’s five projects to privatize housing for unaccompanied personnel. In 2002, Congress amended the MHPI to provide the Navy with the authority to carry out not more than three pilot projects using the private sector for the acquisition or construction of unaccompanied personnel housing. The amendment to the MHPI also authorized the payment of higher rates of partial BAH to personnel occupying housing acquired using the pilot authority. The Navy implemented two such projects—at San Diego, California, and at Hampton Roads, Virginia—before its pilot authority expired on September 30, 2009. According to Navy officials, these locations were selected because both are fleet concentration areas, and the privatization projects also support the Navy’s Homeport Ashore Program. 
The Navy’s projects include 8 existing buildings (1 at San Diego and 7 at Hampton Roads) that the Navy conveyed to the private-sector developer and 91 new buildings (3 at San Diego; 1 mid-rise building and 87 “manor homes,” each consisting of five two-bedroom apartments, at Hampton Roads). San Diego has 2,398 bedrooms, while Hampton Roads has 3,682 bedrooms, for a total of 6,080 bedrooms. The development costs for both projects totaled around $1.1 billion, of which the Navy provided cash equity investments of about $80 million, with the developers providing about $1 billion. The developers’ costs generally included the costs of construction, project financing, and operating expenses. Table 3 summarizes the status of the Navy’s two projects to privatize housing for junior and mid-level unaccompanied personnel. Details about each project follow. In December 2006, the Navy awarded its first pilot project to privatize housing for junior unaccompanied personnel at Naval Station San Diego, California. The project included the privatization of one existing building and the construction of three new buildings. According to Navy officials, the existing building includes 258 “modules” built to the 1995 DOD standards for housing for unaccompanied personnel, each featuring two sleeping rooms and a small common area. The new buildings include 941 “market-style” two-bedroom apartments, and, in total, the San Diego project provides 2,398 bedrooms. The existing building, which was conveyed to the developer, was intended to house junior unaccompanied personnel (E-4/Petty Officer Third Class and below), and the new buildings were intended to house mid-level unaccompanied personnel (E-4/Petty Officer Third Class with more than 4 years of service to E-6/Petty Officer First Class). The three new buildings became available to rent within a 4-month period, beginning in December 2008. 
Navy officials told us that delivering 1,882 new beds within 4 months caused significant occupancy challenges, and that the target population of E-4 to E-6 has never been realized because of the on-base location of the project. In addition, they stated that while the sailors recognize the superior facilities and amenities, they are reluctant to return to quarters inside the installation’s fence line with restricted access for their friends and family. Therefore, the private-sector developer and the Navy decided to temporarily expand the target demographic from E-4 with more than 4 years of service through E-6 to now include Homeport Ashore sailors and junior shore-based sailors (E-4 and below). According to Navy officials, this shift has largely solved the occupancy challenges, yet it has strained revenues for the private developer, as Homeport Ashore sailors receive only a partial BAH rate based on the market rent for the existing building, but the private-sector developer’s financial projections were based on the market rent for the new buildings. The new buildings were constructed to higher standards compared with the existing one, and have a higher rent structure that is equivalent to current market rents for comparable housing in the San Diego area. The Navy’s evaluation of the developer’s proposed budget for 2013 noted that although the overall occupancy rate for the San Diego project at the end of 2012 was about 96 percent, the revenues being received were insufficient to sustain the project over the long term. Therefore, in June 2013, the Office of the Assistant Secretary of the Navy (Energy, Installations and Environment) requested the Under Secretary of Defense (Personnel and Readiness) to authorize a higher partial rate of the BAH for junior unaccompanied sailors residing in the new buildings. The higher partial rate of BAH requested would be equivalent to the market rents for the new buildings. 
In September 2013, the Office of the Assistant Secretary of Defense approved the Navy’s request for an increase in the partial rate for the BAH. The private-sector developer’s cost for the project was about $321 million, with the Navy providing a cash equity investment of about $43 million for a total of about $364 million. Figure 1 shows a bedroom in one of the new buildings at San Diego, California. The Navy’s Hampton Roads, Virginia, project, awarded in December 2007, was built to house junior unaccompanied personnel (E-4/Petty Officer Third Class with fewer than 4 years of service and below). The project included 7 existing buildings on two installations (Naval Station Norfolk, Virginia, and Naval Support Activity, Norfolk, Virginia) that were conveyed to the developer and 88 newly constructed buildings on three separate locations off the installation. Although the new buildings are off the installation, two locations are on Navy property leased to the developer, and one location (Newport News) is on land donated by the city. In total, the Hampton Roads project includes 1,913 apartments and 3,682 bedrooms. Specifically, the 7 existing buildings include 723 apartments and 1,315 bedrooms, and the 88 new buildings include 1,190 apartments and 2,367 bedrooms. According to Navy officials, the Hampton Roads project initially struggled to meet lease expectations because of the reluctance of commanding officers to allow sailors off their ships. As a result, the Commander, Naval Surface Force Atlantic, directed commanding officers to comply with the Navy’s Homeport Ashore initiative by allowing sailors to move off the ship. The private-sector developer’s cost for the project was about $713 million, with the Navy providing a cash equity investment of $37 million for a total of about $750 million. According to data provided by the Navy, the project’s average occupancy rate is about 94 percent. We are not making any recommendations in this report. 
DOD opted not to provide formal comments on a draft of this report, but provided technical comments, which were incorporated into this report as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I. In addition to the contact named above, Kimberly Seay, Assistant Director; Vijay J. Barnabas; Julie Corwin; Mae Jones; Barbara Joyce; Carol Petersen; Michael Silver; and Michael Willems made key contributions to this report. Defense Infrastructure: Improved Guidance Needed for Estimating Alternatively Financed Project Liabilities. GAO-13-337. Washington, D.C.: April 18, 2013. Military Housing: Enhancements Needed to Housing Allowance Process and Information Sharing Among Services. GAO-11-462. Washington, D.C.: May 16, 2011. Military Housing Privatization: DOD Faces New Challenges Due to Significant Growth at Some Installations and Recent Turmoil in the Financial Markets. GAO-09-352. Washington, D.C.: May 15, 2009. Military Housing: Management Issues Require Attention as the Privatization Program Matures. GAO-06-438. Washington, D.C.: April 28, 2006. Military Housing: Further Improvements Needed in Requirements Determination and Program Review. GAO-04-556. Washington, D.C.: May 19, 2004. Military Housing: Better Reporting Needed on the Status of the Privatization Program and the Costs of Its Consultants. GAO-04-111. Washington, D.C.: October 9, 2003. 
Military Housing: Opportunities That Should Be Explored to Improve Housing and Reduce Costs for Unmarried Junior Servicemembers. GAO-03-602. Washington, D.C.: June 10, 2003. Military Housing: Management Improvements Needed as the Pace of Privatization Quickens. GAO-02-624. Washington, D.C.: June 21, 2002. Military Housing: DOD Needs to Address Long-Standing Requirements Determination Problems. GAO-01-889. Washington, D.C.: August 3, 2001. Military Housing: Continued Concerns in Implementing the Privatization Initiative. GAO/NSIAD-00-71. Washington, D.C.: March 30, 2000. Military Housing: Privatization Off to a Slow Start and Continued Management Attention Needed. GAO/NSIAD-98-178. Washington, D.C.: July 17, 1998.
Partly in response to concerns that inadequate housing might be contributing to servicemembers’ decisions to leave the military, Congress enacted the Military Housing Privatization Initiative (MHPI) in 1996. The initiative gave the Department of Defense (DOD) legal authorities to replace or renovate inadequate housing for unaccompanied military personnel (those without dependents) and military families using private-sector financing, ownership, operation, and maintenance. Certain military personnel receive the BAH, which can be used to pay rent to live in privatized housing. Since 1996, DOD has built and modernized on-installation unaccompanied personnel housing using military construction funds. According to a February 2013 DOD report to Congress, from fiscal years 1996 through 2012, DOD spent over $20 billion of military construction funds to build and modernize on-installation housing for unaccompanied military personnel. GAO was asked to review DOD’s efforts to privatize unaccompanied housing. GAO discusses the (1) analyses the military services conducted to make decisions about privatizing housing for unaccompanied personnel and (2) status of housing projects the military services have privatized for unaccompanied personnel. GAO obtained and reviewed fiscal years 1996-2013 housing plans and analyses the services conducted, reviewed information on privatization projects, and interviewed DOD and service officials. GAO is not making recommendations in this report. Since Congress enacted the MHPI in 1996, the military services conducted several analyses and considered other factors to determine whether to privatize housing for unaccompanied personnel. These analyses were conducted between 1997 and 2011. The Army’s and the Navy’s analyses compared different scenarios—such as whether to rely on privatization or use traditional military construction funding to improve housing quality—and considered information from multiple installations in these analyses. 
In contrast, the Air Force and Marine Corps analyzed the feasibility of privatizing unaccompanied housing at a few selected installations. For example, the Air Force based its initial analysis on information for two locations, while the Marine Corps based its 2008 analysis on information specific to one installation. The Navy and Army concluded that privatization could be used under a narrow set of circumstances at specific installations, such as where unaccompanied servicemembers were already receiving the basic allowance for housing (BAH). The Air Force and Marine Corps concluded that privatization was not suitable for meeting any of their housing needs. For example, an April 2000 Air Force memorandum indicated that privatization could have a negative effect on building unit cohesion. Other factors also played a role in the four services' decisions about whether to privatize housing, including (1) the limited availability of the BAH for junior unaccompanied personnel, which may result in not having a dedicated stream of income to pay rent for privatized housing; (2) the frequency or duration of unit deployments, which could affect the occupancy rates of unaccompanied housing; and (3) uncertainty about the future size of the military, and whether there would be sufficient demand for privatized housing. Between 1996 and 2013, the Army and Navy implemented seven privatized unaccompanied personnel housing projects. The Air Force and Marine Corps have not used the privatization authorities, and are instead using military construction funds to improve the quality of their unaccompanied personnel housing. Air Force housing officials told us that Air Force unaccompanied personnel housing inventory generally meets current housing needs. 
According to Marine Corps officials, the Marine Corps intends to eliminate existing housing deficiencies by demolishing inadequate unaccompanied personnel housing and using military construction funds to replace or renovate housing by the end of fiscal year 2014. According to Office of the Secretary of Defense and military service housing officials, none of the services have plans to pursue any future privatized housing projects for unaccompanied personnel. GAO is not making recommendations in this report.
Traditionally, the federal government has used a variety of access control techniques to protect its facilities and computer systems. Visual authentication of ID cards has typically been used as a way to control access to physical facilities. However, smart card technology can help authenticate the identity of an individual in a substantially more rigorous way than is possible with traditional ID cards. Such cards can provide higher levels of assurance for controlling access to facilities as well as computer systems and networks. Access control is the process of determining the permissible activities of users and authorizing or prohibiting activities by each user. Controlling a user’s access to facilities and computer systems includes setting rights and permissions that grant access only to authorized users. There are two types of access control: physical access and logical access. Physical access control focuses on restricting the entry and exit of users into or out of a physical area, such as a building or a room in a building. Physical access control techniques include devices such as locks that require a key to open doors or ID cards that establish an individual’s authorization to enter a building. Logical access control is used to determine what electronic information and systems users and other systems may access and what may be done to the information that is accessed. Methods for controlling logical access include requiring a user to enter a password to access information stored on a computer. Access control techniques vary in the extent to which they can provide assurance that only authorized individuals and systems have been granted access. Some techniques can be easily subverted, while others are more difficult to circumvent. Generally, techniques that provide higher levels of assurance are more expensive, more difficult to implement, and may cause greater inconvenience to users than techniques that provide lower levels of assurance. 
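The logical access control described above can be sketched as a simple allow-list: each user is granted a set of permitted actions on a resource, and any request outside that set is denied. The user names, resource, and permissions below are hypothetical, chosen only to illustrate the rights-and-permissions model.

```python
# Hypothetical grant table mapping (user, resource) to permitted actions.
PERMISSIONS = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_authorized(user: str, resource: str, action: str) -> bool:
    """Return True only if the user was explicitly granted the action.

    Anything not explicitly granted is denied (default-deny), which is
    the usual posture for access control systems.
    """
    return action in PERMISSIONS.get((user, resource), set())

print(is_authorized("bob", "payroll.db", "write"))    # granted
print(is_authorized("alice", "payroll.db", "write"))  # denied by default
```

The default-deny design choice matters: a user absent from the grant table, or requesting an ungranted action, is rejected without any special-case logic.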
When deciding which access control mechanisms to implement, agencies must first understand the level of risk associated with the facility or information that is to be protected. The higher the risk level, the greater the need for agencies to implement a high-assurance-level access control system. One means to implement a high-assurance-level access control system is through the use of smart cards. Smart cards are plastic devices that are about the size of a credit card and contain an embedded integrated circuit chip capable of storing and processing data. The unique advantage that smart cards have over traditional cards with simpler technologies, such as magnetic strips or bar codes, is that they can exchange data with other systems and process information, rather than simply serving as static data repositories. By securely exchanging information, a smart card can help authenticate the identity of the individual possessing the card in a far more rigorous way than is possible with traditional ID cards. A smart card’s processing power also allows it to exchange and update many other kinds of information with a variety of external systems, which can facilitate applications such as financial transactions or other services that involve electronic record-keeping. In addition to providing ways to enhance security for federal facilities, smart cards also can be used to significantly enhance the security of an agency’s computer systems by tightening controls over user access. Users wishing to log on to a computer system or network with controlled access must “prove” their identity to the system—a process called authentication. Many systems authenticate users by requiring them to enter secret passwords. This requirement provides only modest security because passwords can be easily compromised. Substantially better user authentication can be achieved by supplementing passwords with smart cards. 
To gain access under this scenario, a user is prompted to insert a smart card into a reader attached to the computer, as well as type in a password. This authentication process is significantly harder to circumvent because an intruder not only would need to guess a user’s password but would also need to possess a smart card programmed with the user’s information. Even stronger authentication can be achieved by using smart cards in conjunction with biometrics. Smart cards can be configured to store biometric information (such as fingerprints or iris scans) in an electronic record that can be retrieved and compared with an individual’s live biometric scan as a means of verifying that person’s identity in a way that is difficult to circumvent. An information system requiring users to present a smart card, enter a password, and verify a biometric scan uses what is known as “three-factor authentication,” which requires users to authenticate themselves by means of “something they possess” (the smart card), “something they know” (the password), and “something they are” (the biometric). Systems employing three-factor authentication provide a relatively high level of security. The combination of a smart card used with biometrics can provide equally strong authentication for controlling access to physical facilities. Smart cards can also be used in conjunction with public key infrastructure (PKI) technology to better secure electronic messages and transactions. PKI is a system of computers, software, and data that relies on certain cryptographic techniques to protect sensitive communications and transactions. A properly implemented and maintained PKI can offer several important security services, including assurances that (1) the parties to an electronic transaction are really who they claim to be, (2) the information has not been altered or shared with any unauthorized entity, and (3) neither party will be able to wrongfully deny taking part in the transaction. 
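The three-factor check described above can be sketched as a simple verification routine. The card serial, password hash, and biometric template below are illustrative stand-ins: a real PIV system performs cryptographic card authentication and live fingerprint matching rather than these simplified comparisons.

```python
import hashlib

# Toy three-factor authentication sketch: the card ("something you
# possess"), password ("something you know"), and biometric
# ("something you are") must all verify before access is granted.
# Enrolled values below are hypothetical.

enrolled = {
    "card_serial": "CARD-0001",
    "password_hash": hashlib.sha256(b"s3cret").hexdigest(),
    "biometric_template": "TEMPLATE-A7",  # stand-in for a fingerprint template
}

def authenticate(card_serial: str, password: str, live_biometric: str) -> bool:
    has_card = card_serial == enrolled["card_serial"]
    knows_pw = hashlib.sha256(password.encode()).hexdigest() == enrolled["password_hash"]
    is_user = live_biometric == enrolled["biometric_template"]
    return has_card and knows_pw and is_user  # all three factors required

print(authenticate("CARD-0001", "s3cret", "TEMPLATE-A7"))  # True
print(authenticate("CARD-0001", "guess", "TEMPLATE-A7"))   # False
```

Because all three checks must pass, an attacker who compromises any single factor (for example, a guessed password) is still denied.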
PKI systems are based on cryptography and require each user to have two different digital “keys” to gain access: a public key and a private key. The public key is used to encrypt information, making it unintelligible to any unauthorized recipients. It is called “public” because it is made freely available to any users or systems that wish to be able to authenticate the user. To decrypt the information requires the private key, which is kept confidential on the user’s smart card. If a user’s card is able to successfully decrypt a message that was encrypted using the user’s public key, then the authenticity of the user’s smart card is proven. Public and private keys for PIV cards are generated by the card at the time it is issued. Security experts generally agree that PKI technology is most effective when used in tandem with hardware tokens, such as smart cards. PKI systems use cryptographic techniques to generate and issue electronic “certificates,” which contain information about the identity of the users, as well as the users’ public keys. The certificates are then used to verify digital signatures and facilitate data encryption. The certification authority that issues the certificates is also responsible for maintaining a certificate revocation list, which provides status information on whether the certificate is still valid or has been revoked or suspended. The PKI software in the user’s computer can verify that a certificate is valid by first verifying that the certificate has not expired, and then by checking the certificate revocation list or online status information to ensure it has not been revoked or suspended. In August 2004, the President issued HSPD-12, which directed Commerce to develop a new standard for secure and reliable forms of ID for federal employees and contractor personnel by February 27, 2005. The directive defined secure and reliable ID as meeting four control objectives. 
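The encrypt-with-public-key, decrypt-with-private-key proof described above can be illustrated with textbook RSA on deliberately tiny numbers. This is only a sketch of the challenge-response idea; real PIV cards use full-size keys, padding, and hardened on-chip implementations.

```python
# Toy RSA challenge-response sketch (textbook key sizes; NOT secure).
# A verifier encrypts a random challenge with the cardholder's public
# key; only a card holding the matching private key can decrypt it,
# which proves the card's authenticity.

p, q = 61, 53            # toy primes; real keys use 2048-bit or larger moduli
n = p * q                # 3233, the public modulus
e = 17                   # public exponent (public key is (n, e))
d = 2753                 # private exponent: e * d = 1 mod (p-1)*(q-1)

def encrypt_with_public_key(m: int) -> int:
    return pow(m, e, n)

def decrypt_with_private_key(c: int) -> int:
    return pow(c, d, n)  # on a PIV card, this step happens on-chip

challenge = 65                               # verifier's random challenge
ciphertext = encrypt_with_public_key(challenge)
response = decrypt_with_private_key(ciphertext)
print(response == challenge)                 # True: card proved it holds the key
```

A production verifier would additionally check the card's certificate against the certificate revocation list or an online status service before trusting the result, as the report describes.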
Specifically, the identification credentials were to be

• based on sound criteria for verifying an individual employee’s or contractor personnel’s identity;
• strongly resistant to identity fraud, tampering, counterfeiting, and terrorist exploitation;
• able to be rapidly authenticated electronically; and
• issued only by providers whose reliability has been established by an official accreditation process.

HSPD-12 stipulated that the standard must include criteria that are graduated from “least secure” to “most secure” to ensure flexibility in selecting the appropriate level of security for each application. In response to HSPD-12, Commerce’s NIST published FIPS 201, Personal Identity Verification of Federal Employees and Contractors, on February 25, 2005. The standard specifies the technical requirements for PIV systems to issue secure and reliable ID credentials to federal employees and contractor personnel for gaining physical access to federal facilities and logical access to information systems and software applications. Smart cards are a primary component of the envisioned PIV system. The FIPS 201 standard is composed of two parts. The first part, called PIV-I, sets standards for PIV systems in three areas: (1) identity proofing and registration, (2) card issuance and maintenance, and (3) protection of card applicants’ privacy. The second part of the FIPS 201 standard, PIV-II, provides technical specifications for the implementation and use of interoperable smart cards in PIV systems. To verify individuals’ identities, under PIV-I, agencies are directed to adopt an accredited identity proofing and registration process that is approved by the head of the agency. There are many steps to the verification process, such as completing a background investigation of the applicant, conducting a fingerprint check prior to credential issuance, and requiring applicants to provide two original forms of identity source documents from an OMB-approved list of documents. 
Agencies are also directed to adopt an accredited card issuance and maintenance process that is approved by the head of the agency. This process should include standardized specifications for printing photographs, names, and other information on PIV cards and for other activities, such as capturing and storing biometric and other data, and issuing, distributing, and managing digital certificates. Finally, agencies are directed to perform activities to protect the privacy of the applicants, such as assigning an individual to the role of “senior agency official for privacy” to oversee privacy-related matters in the PIV system; providing full disclosure of the intended uses of the PIV card and related privacy implications to the applicants; and using security controls described in NIST guidance to accomplish privacy goals, where applicable. The second part of the FIPS 201 standard, PIV-II, provides technical specifications for the implementation and use of interoperable smart cards in PIV systems. The components and processes in a PIV system, as well as the identity authentication information included on PIV cards, are intended to provide for consistent authentication methods across federal agencies. The PIV-II cards (see example in fig. 1) are intended to be used to access all federal physical and logical environments for which employees are authorized. Appendix II provides more information on the specific requirements and components of PIV-II. The PIV cards contain a range of features—including a common appearance, security features, photographs, cardholder unique identifiers (CHUID), fingerprints, and PKI certificates—to enable enhanced identity authentication at different assurance levels. To use the enhanced electronic capabilities, specific infrastructure needs to be in place. 
This infrastructure may include biometric (fingerprint) readers, personal ID number (PIN) input devices, and connections to information systems that can process PKI digital certificates and the CHUIDs. Once acquired, these various devices need to be integrated with existing agency systems. For example, PIV system components may need to interface with human resources systems, so that when an employee resigns or is terminated and the cardholder’s employment status is changed in the human resources systems, the change is also reflected in the PIV system. Furthermore, card readers that are compliant with FIPS 201 need to exchange information with existing physical and logical access control systems in order to enable doors and systems to unlock once a cardholder has been successfully authenticated and access has been granted. HSPD-12 guidance—including OMB guidance, FIPS 201, and other NIST guidance—allows for several different types of authentication that provide varying levels of security assurance. For example, simple visual authentication of PIV cards offers a rudimentary level of security, whereas verification of the biometric identifiers contained in the credential provides a much higher level of assurance. OMB and NIST guidance direct agencies to use risk-based methods to decide which type of authentication is appropriate in any given circumstance. Because visual authentication provides very limited assurance, OMB has directed that use of visual authentication be minimized. OMB guidance issued in February 2011 further stated that agencies were in a position to aggressively step up their efforts to use the electronic capabilities of PIV cards and should develop policies to require their use as the common means of authentication for access to agency facilities, networks, and information systems. Examples of approved methods for using PIV cards for authentication and associated assurance levels are described in table 1. 
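One way to picture the reader-to-access-control-system exchange described above is a short door-control sketch. The CHUID strings and authorization registry below are simplified placeholders; a real FIPS 201 CHUID is a structured, digitally signed data object carrying agency identifiers and an expiration date.

```python
from datetime import date

# Simplified physical access control sketch: read the card's CHUID,
# check that the credential has not expired, then check the CHUID
# against the facility's list of authorized cardholders before
# signaling the door to unlock. CHUID values here are illustrative.

authorized_chuids = {"AGENCY01-0000123", "AGENCY01-0000456"}

def pacs_decision(chuid: str, expiration: date, today: date) -> str:
    if today > expiration:
        return "deny: credential expired"
    if chuid not in authorized_chuids:
        return "deny: not authorized for this facility"
    return "grant: unlock door"

print(pacs_decision("AGENCY01-0000123", date(2026, 1, 1), date(2025, 6, 1)))
# grant: unlock door
print(pacs_decision("AGENCY01-0000999", date(2026, 1, 1), date(2025, 6, 1)))
# deny: not authorized for this facility
```

CHUID-only verification corresponds to one of the lower electronic assurance levels; higher-assurance flows would add the PIN, biometric, or PKI checks sketched earlier.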
In addition to the authentication capabilities discussed in table 1, PIV cards also support the use of PIN authentication, which may be used in conjunction with one of these capabilities. For example, the PIN can be used to control access to biometric data on the card when conducting a fingerprint check. NIST issued several special publications that provide supplemental guidance on various aspects of the FIPS 201 standard, including guidance on verifying that agencies or other organizations have the proper systems and administrative controls in place to issue PIV cards and have the technical specifications for implementing the required encryption technology. Additional information on NIST’s special publications is provided in appendix III. In addition, NIST developed a suite of tests to be used by approved commercial laboratories to validate whether commercial products for the PIV card and the card interface are in conformance with FIPS 201. These laboratories use the NIST tests to determine whether individual commercial products conform to FIPS 201 specifications. Once commercial products pass conformance testing, they must then go through performance and interoperability testing. GSA developed these tests to ensure that products and services meet FIPS 201 requirements. GSA tests products that have successfully passed NIST’s conformance tests, as well as other products required by FIPS 201 but not within the scope of NIST’s conformance tests, such as PIV card readers, fingerprint capturing devices, and software used to program the cards with employees’ data. Products that successfully pass GSA’s conformance tests are included on its list of products that are approved for agencies to acquire. OMB is responsible for ensuring that agencies comply with the standard. OMB’s 2005 memorandum to executive branch agencies outlined instructions for implementing HSPD-12 and the new standard. 
The memorandum specified to whom the directive applies; to what facilities and information systems FIPS 201 applies; and, as outlined in the following text, the schedule that agencies must adhere to when implementing the standard.

• October 27, 2005. For all new employees and contractor personnel, adhere to the identity proofing, registration, card issuance, and maintenance requirements of the first part (PIV-I) of the standard.
• October 27, 2006. Begin issuing cards that comply with the second part (PIV-II) of the standard and implementing the privacy requirements.
• October 27, 2007. Verify and/or complete background investigations for all current employees and contractor personnel who have been with the agency for 15 years or less. Issue PIV cards to these employees and contractor personnel and require that they begin using their cards by this date.
• October 27, 2008. Complete background investigations for all individuals who have been federal agency employees for more than 15 years. Issue cards to these employees and require them to begin using their cards by this date.

In addition, OMB directed that each agency provide certain information on its plans for implementing HSPD-12, including the number of individuals requiring background checks and the dates by which the agency planned to be compliant with PIV-I and PIV-II requirements. OMB required agencies to post quarterly reports beginning on March 1, 2007, on their public websites showing the number of background checks that had been completed and PIV credentials that had been issued. Each quarter, OMB has posted a summary report of the governmentwide implementation status of HSPD-12 on its website. 
After determining that a number of agencies were going to have difficulties in meeting the original deadlines for card issuance, OMB requested in fiscal year 2008 that agencies confirm that their previous plans were still on target or provide updated plans with revised schedules for meeting the requirements of HSPD-12 and the OMB memoranda. Other related guidance that OMB issued includes guidance to federal agencies on electronic authentication practices, sample privacy documents for agency use in implementing HSPD-12, a memorandum to agencies about validating and monitoring agency issuance of PIV credentials, guidance on protecting sensitive agency information, a memorandum to agencies on safeguarding against and responding to a breach of personally identifiable information, and updated instructions to agencies on publicly reporting their HSPD-12 implementation status. On June 30, 2006, OMB issued a memorandum to agency officials that provided updated guidance for the acquisition of products and services for the implementation of HSPD-12. Specifically, OMB provided acquisition guidance for FIPS 201-compliant commercial products that have passed, among other tests, NIST’s conformance tests and GSA’s performance and conformance tests. For example, OMB referred agencies to a special item number on GSA’s IT Schedule 70 for the acquisition of approved HSPD-12 implementation products and services, noting that all products and services offered under the special item number had been evaluated and determined to be in compliance with governmentwide requirements. When agencies acquire HSPD-12 products and services through acquisition vehicles other than the specified GSA schedule, the OMB memo required them to ensure that only approved products and services were acquired and to ensure compliance with other federal standards and requirements for systems used to implement HSPD-12. 
In addition, GSA established a managed service office that offers shared services to federal civilian agencies to help reduce the costs of procuring FIPS 201-compliant equipment, software, and services by sharing some of the infrastructure, equipment, and services among participating agencies. According to GSA, the shared service offering—referred to as the USAccess Program—is intended to provide several services, such as producing and issuing the PIV cards. As of April 2011, GSA had 90 agency customers with more than 591,000 government employees and contractor personnel to whom cards were issued through shared service providers. In addition, as of April 2011, the Managed Service Office had installed over 385 enrollment stations with 18 agencies actively enrolling employees and issuing PIV cards. While there are several services offered by the office, it is not intended to provide support for all aspects of HSPD-12 implementation. For example, the office does not provide services to help agencies integrate their physical and logical access control systems with their PIV systems. In 2006, GSA’s Office of Governmentwide Policy and the federal Chief Information Officers (CIO) Council established the interagency HSPD-12 Architecture Working Group, which is intended to develop interface specifications for HSPD-12 system interoperability across the federal government. As of April 2011, the group had issued 13 interface specification documents, including a specification for exchanging data between an agency and a shared service provider. In February 2006, we reported that agencies faced several challenges in implementing HSPD-12, including constrained testing time frames and funding uncertainties as well as incomplete implementation guidance. We recommended that OMB monitor agencies’ implementation process and completion of key activities. 
In response to this recommendation, beginning on March 1, 2007, OMB directed agencies to post to their public websites quarterly reports on the number of PIV cards they had issued to their employees, contractor personnel, and other individuals. In addition, in August 2006, OMB directed each agency to submit an updated implementation plan. We also recommended that OMB amend or supplement governmentwide guidance pertaining to the extent to which agencies should make risk-based assessments regarding the applicability of FIPS 201. OMB did not implement this recommendation. In February 2008, we reported that much work had been accomplished to lay the foundations for implementation of HSPD-12 but that agencies had made limited progress in implementing and using PIV cards. In addition, we noted that a key factor contributing to agencies’ limited progress was that OMB had at the time emphasized the issuance of cards and not the full use of the cards’ capabilities. We recommended that OMB establish realistic milestones for full implementation of the infrastructure needed to best use the electronic capabilities of PIV cards in agencies. We also recommended that OMB require agencies to align the acquisition of PIV cards with plans for implementing their technical infrastructure to best use the cards’ electronic authentication capabilities. In February 2011, OMB directed agencies to issue implementation policies by March 31, 2011, through which the agencies will require use of the PIV credentials as the common means of authentication for access to agency facilities, networks, and information systems. 
Agencies were instructed to include the following requirements, among others, in their policies: all new systems under development must be able to use PIV credentials prior to being made operational, existing physical and logical access control systems must be upgraded to use PIV credentials, and agency processes must accept and electronically verify PIV credentials issued by other federal agencies. Overall, OMB and federal agencies have made mixed progress in implementing HSPD-12 requirements aimed at establishing a common identification standard for federal employees and contractor personnel. On the one hand, the federal CIO Council, OMB, and NIST have issued guidance to agencies specifying milestones for conducting background investigations and issuing PIV cards as well as requirements for implementing the electronic authentication capabilities of the cards. Also, agencies have made substantial progress in conducting background investigations and issuing PIV cards. However, a few agencies reported that background investigations and card issuance for contractor personnel and “other” staff—defined by OMB as short-term employees (less than 6 months on the job), guest researchers, volunteers, and intermittent, temporary, or seasonal employees—were not as complete. Additionally, agencies have made fair progress in implementing the electronic capabilities of the PIV card for physical access to their facilities. While they have generally begun using PIV cards for access to their headquarters buildings, most have not implemented the same capabilities at their major field office facilities. Further, limited progress has been made in using PIV cards for access to agency information systems. Several agencies have taken steps to acquire and deploy hardware and software allowing users to access agency information systems via PIV cards, but none have fully implemented the capability. 
Lastly, agencies have made minimal progress in achieving the goal of interoperability among agencies, having generally not established systems and procedures for universally reading and electronically validating PIV cards issued by other federal agencies. While early HSPD-12 guidance from OMB focused on completion of background investigations and issuance of PIV cards, beginning in 2008 the federal CIO Council, OMB, and NIST took actions to more fully address HSPD-12 implementation, including focusing on the use of the electronic capabilities of the cards for physical and logical access control. In November 2009, the federal CIO Council issued the Federal Identity, Credential, and Access Management Roadmap and Implementation Guidance, which established a common framework for agencies to use in planning and executing identity, credential, and access management programs. The roadmap went further than previous documents in providing guidance to agencies on complete operational scenarios involving HSPD-12 authentication. It also outlined strategies for developing a standardized identity and access management system across the federal government and defined “use cases” and transition milestones to assist agencies in implementing the identity, credential, and access management architecture. For example, the roadmap’s use cases addressed topics such as “Create, Issue, and Maintain PIV Card,” “Grant Physical Access to Employee or Contractor,” and “Grant Visitor or Local Access to Federally-Controlled Facility or Site.” These use cases specified detailed models for agencies to follow in designing processes to carry out these functions. In May 2008, OMB issued guidance to agencies on preparing or refining plans for incorporating the use of PIV credentials with physical and logical access control systems. The guidance included a checklist of questions for agencies to consider when planning for the use of PIV credentials with physical and logical access control systems. 
Examples of the questions include:

• Does your agency have a documented plan for incorporating the use of PIV credentials for both physical and logical access control?
• Does your agency have policy, implementing guidance, and a process in place to track progress toward the appropriate use of the PIV credentials?
• Does your plan include a process for authorizing the use of other agency PIV credentials to gain access to your facilities and information systems?
• Has your agency identified all physical access points where you intend to require access using the electronic capabilities of the PIV credentials?
• Has your agency performed the analyses to identify the changes that must be made to upgrade its systems’ capabilities to support use of the electronic capabilities of the PIV credentials for physical access?

Further, in February 2011, OMB issued guidance that reiterated agency responsibilities for complying with HSPD-12 and specified new requirements. OMB required agencies to develop implementation policies by March 31, 2011, through which the full use of PIV credentials for access to federal facilities and information systems would be required. The implementation policies were required to include the following provisions:

• effective immediately, enable the use of PIV credentials in all new systems under development;
• effective as of the beginning of fiscal year 2012, upgrade all existing physical and logical access control systems to use PIV cards before investing in other activities;
• procure all services and products for facility and system access control in accordance with HSPD-12 policy;
• accept and electronically verify PIV credentials issued by other federal agencies; and
• align HSPD-12 implementation plans with the federal CIO Council’s Federal Identity, Credential, and Access Management Roadmap.

OMB’s February 2011 guidance was much more explicit than its previous HSPD-12 guidance in requiring agencies to make use of the electronic capabilities of PIV cards. 
The guidance noted that the majority of the federal workforce, as of December 2010, was in possession of PIV credentials and thus agencies were in a position to aggressively step up their efforts to use the electronic capabilities of the credentials. Lastly, beginning in fiscal year 2010, OMB required agencies to report detailed security metrics, including PIV card usage status for both logical and physical access, through the Federal Information Security Management Act Cyberscope system, which is designed to capture operational pictures of agency systems and provide insight into agency information security practices. In 2008, NIST issued guidance on using PIV credentials in physical access control systems. The guidance provided a detailed analysis of threat considerations, PIV authentication mechanisms, and potential use cases, so that agencies would be able to determine what specific physical access control system architectures to implement at their facilities. Specifically, this guidance discusses various PIV card capabilities, so that risk-based assessments can be made and appropriate PIV authentication mechanisms selected to manage physical access to federal government facilities. FIPS 201 requires agencies to adopt an accredited proofing and registration process that includes, among other things, initiating or completing a background investigation or ensuring that one is on record for all employees and contractor personnel before they are issued PIV cards. The standard requires agencies to adopt an accredited card issuance and maintenance process. Based on this requirement, in August 2005, OMB directed agencies to verify or complete background investigations for all employees, contractor personnel, and other staff seeking access to federal facilities and information systems and issue PIV cards for their use by October 2008. 
We reported in February 2008 that agencies had generally completed background checks for most of their employees and contractor personnel. Since 2008, agencies have made further progress in completing background investigations for the majority of personnel requiring them. Three of the agencies that we reviewed, DHS, HUD, and NRC, had successfully completed background investigations for all such personnel, including employees and contractor staff. All of the remaining five agencies—Commerce, Interior, Labor, NASA, and USDA—had completed investigative checks for over 85 percent of their employees and contractor staff. Figure 2 shows the eight agencies’ progress from 2008 to 2011 in conducting required background investigations for all staff requiring them, such as employees, contractor staff, and other staff. While agencies have made progress overall in completing background investigations for most of their employees, several agencies still have not completed all required investigations. These agencies reported that background investigations for contractor and other staff were often not as complete as investigations for employees. According to officials at Interior and Labor, the high turnover rate of these staff is one of the key contributing factors to their inability to maintain completed background investigations for higher percentages of these staff. Likewise, according to a USDA official, a large number of seasonal employees are hired each year, particularly in the firefighting season, and it is difficult to maintain a high percentage of completed background checks for these types of employees. Figure 3 shows agencies’ completion rates of background checks for employees, contractor personnel, and other personnel as of March 2011. Since 2008, agencies have also made substantial progress in issuing PIV cards to employees and other personnel requiring them. 
Of the eight agencies we reviewed, two (HUD and NRC) have issued PIV card credentials to their entire workforce, and two (Labor and NASA) have issued PIV cards to at least 93 percent of their personnel requiring such credentials. The other four agencies (Commerce, DHS, Interior, and USDA) have issued cards to between 69 percent and 80 percent of their personnel requiring credentials. According to Commerce officials, the department’s issuance numbers were low (69 percent) specifically because its U.S. Patent and Trademark Office (USPTO) had been slow to issue PIV credentials. Unlike the rest of Commerce, USPTO did not rely on GSA’s Managed Services Office for card issuance. According to these officials, USPTO was given permission to use its existing PKI infrastructure to issue PIV cards, which has taken extra time. Commerce officials said they expected to complete issuance of PIV cards to all staff requiring cards by May 2012. DHS had issued PIV cards to about 80 percent of its workforce as of March 31, 2011. In response to OMB’s call for implementation plans from agencies in 2008, DHS submitted a plan that foresaw completion of card issuance by December 31, 2010. However, DHS did not meet the revised deadline. The department’s Office of Inspector General reported in January 2010 that the slow progress was the result of weak program management, including insufficient funding and resources, and a change in implementation strategy from a component-by-component to a centralized approach. At the time of our review, the department was working to meet a new deadline of September 30, 2011, to complete issuance of PIV cards. Interior officials stated that the department’s issuance numbers were low (74 percent) due to difficulties in issuing cards to personnel in remote field offices. According to these officials, 400 to 500 locations have been identified to be serviced by “mobile” PIV credentialing stations. 
Before credentialing can be done at these locations, local staff must be trained and certified in performing registration duties. Interior officials stated that they intended to establish target completion dates for issuing credentials at these locations but had not yet done so. USDA officials said their department had previously focused on issuing PIV cards to employees and that many of its component agencies had not established roles and responsibilities for issuing PIV cards to contractor and other staff until fiscal year 2011. According to these officials, the proper management structure is now in place and PIV cards are to be issued to the majority of contractor and other staff by the end of fiscal year 2011. Figure 4 shows agencies’ progress in issuing PIV cards to all staff requiring cards, such as employees, contractor staff, and other staff, between 2008 and 2011. Contractor and other staff, such as temporary and seasonal employees, are a substantial portion of federal agency and department personnel and often require access to agency facilities and information systems. However, agencies have not made as much progress issuing PIV cards to their contractor and other staff as they have for their employees. Based on data provided by agencies, the eight agencies we reviewed issued PIV credentials to a total of 91 percent of their employees, 69 percent of their contractor personnel, and 35 percent of their other personnel as of March 2011. Among the eight agencies reviewed, three (HUD, NASA, and NRC) have issued PIV credentials to at least 90 percent of their contractor personnel. The remaining five have lower issuance numbers varying between 32 percent and 74 percent. According to agency officials, the constant turnover of contractor and other personnel makes it more difficult to ensure that cards are issued to all such staff needing them. 
Figure 5 illustrates agencies’ progress in issuing PIV cards to employees, contractor personnel, and other personnel as of March 2011. HSPD-12 states that agencies shall require the use of the PIV credentials for access to federal facilities to the maximum extent practicable. OMB’s 2005 guidance directed agencies to make risk-based determinations about the type of authentication mechanisms to deploy at their facilities but specified “minimal reliance” on visual authentication as a sole means of authenticating PIV credentials. FIPS 201 and NIST guidance on using PIV credentials in physical access systems also both state that visual authentication provides only a basic level of assurance regarding the identity of a PIV cardholder. OMB’s 2011 guidance required agencies to step up their efforts to use the electronic capabilities of PIV credentials as the common means of authentication for access to agency facilities. We reported in February 2008 that agencies generally had not been using the cards’ electronic authentication capabilities for physical access. Agencies have made fair progress in using the electronic capabilities of the PIV cards for physical access to their facilities. For example, two of the eight agencies we reviewed (NASA and NRC) reported using the electronic capabilities of the PIV cards for physical access to both their headquarters and field office facilities. Specifically, NRC was using electronic verification of the PIV card’s CHUID combined with visual authentication by a guard as the predominant electronic authentication method at its facilities. NASA officials reported that their agency was using electronic CHUID verification combined with visual authentication as the predominant access control method at its headquarters facility and for access to buildings within major field locations. 
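The CHUID-based checks described above can be rendered schematically. The sketch below is a simplified, hypothetical illustration: a real FIPS 201 CHUID is a signed TLV data object carrying the FASC-N, a GUID, and an expiration date, and readers can also verify its issuer signature; only the expiration and revocation checks are shown here.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified stand-in for the FIPS 201 CHUID data object.
@dataclass(frozen=True)
class Chuid:
    fasc_n: str          # Federal Agency Smart Credential Number
    expiration: date

def chuid_access_decision(chuid: Chuid, revoked: set, today: date) -> bool:
    """Electronic CHUID verification: the reader checks that the card
    is unexpired and not on the issuer's revocation list. (Agencies
    such as NRC paired this check with visual inspection by a guard.)"""
    if chuid.expiration < today:
        return False
    if chuid.fasc_n in revoked:
        return False
    return True

card = Chuid(fasc_n="9999-0001", expiration=date(2014, 6, 30))
print(chuid_access_decision(card, revoked=set(), today=date(2011, 3, 31)))        # True
print(chuid_access_decision(card, revoked={"9999-0001"}, today=date(2011, 3, 31)))  # False
```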
Four agencies (DHS, HUD, Interior, and Labor) reported that while they had begun utilizing the electronic capabilities of the PIV card at their headquarters, they had not yet begun using them at all of their major field office facilities. According to DHS officials, the agency has conducted an assessment of all its facilities in the National Capital region to determine what method of authentication was being used for physical access and to develop a strategy to implement PIV-based electronic authentication at each facility. DHS officials stated that approximately 70 percent of these facilities utilize the electronic capabilities of the PIV card for physical access. The same officials stated that they plan to complete a similar assessment of DHS facilities outside of the National Capital region by the fourth quarter of fiscal year 2011. Additionally, DHS officials stated that a new departmentwide implementation strategy will be completed by the second quarter of fiscal year 2012. HUD officials stated that their previous strategy had been to install PIV-related upgrades to physical access control systems in conjunction with other scheduled renovations at each of their field offices. As of March 2011, HUD officials stated that 13 of its 83 field offices had upgraded physical security systems. In December 2008, HUD submitted a plan to OMB establishing fiscal year 2013 as the completion date for the upgrades to the majority of its field offices and fiscal year 2015 for its smallest field offices. According to a HUD official, the department is currently planning to complete PIV-related upgrades at all field offices by the end of fiscal year 2014, pending availability of funds. Interior officials stated that they were using the electronic capabilities of the PIV card at several, but not all, of their major field offices. 
According to Interior officials, in response to OMB’s guidance to step up efforts to use the PIV credentials for access to agency facilities, they established a new Identity, Credential, and Access Management Program Office and plan to convene a working group of representatives from each departmental bureau to develop plans for modernizing the physical access control infrastructure. No time frame has been established for completing these plans. Labor officials stated that they were using the electronic capabilities of the PIV card at 2 of their 10 regional field offices and were assessing the remaining offices to determine whether upgrades to the physical security systems were needed to enable PIV-based electronic authentication. The assessment is expected to be completed by the end of fiscal year 2012, after which necessary upgrades are to be implemented based on priority and the availability of funding. The remaining two agencies (Commerce and USDA) were not using PIV-based electronic authentication at their headquarters facilities or the majority of their other major facilities. A Commerce official stated that major upgrades were still needed to physical access control systems throughout the department to support HSPD-12 requirements, including replacing card readers and upgrading software. Previously the department had focused on card issuance and had not developed plans for card usage. In September 2010, a contractor completed an assessment of the status of physical access systems at the department’s major facilities to determine what steps were needed to develop a departmentwide HSPD-12-compliant system, but specific implementation plans for such a system have not yet been developed. Regarding PIV-enabled access to their headquarters buildings, USDA officials stated that the department was in the process of purchasing card-reader-equipped turnstiles, but that they were unsure when they would be installed because funding had not been obtained. 
In addition, officials stated that 130 of the department’s 250 major field facilities had begun using PIV credentials for access control through the departmentwide physical security system. For the remaining locations, USDA’s component agencies had not yet committed to replacing their hardware and integrating their software with the departmentwide system. USDA officials stated that use of PIV cards for physical access previously had been considered a low priority within the agency, and, as a result, progress had been slow. HSPD-12 requires agencies to use PIV credentials for access to federal information systems. FIPS 201 identifies different methods of electronic authentication that are available via PIV cards for logical access and the respective assurance levels associated with each method. OMB’s 2011 guidance required agencies to step up their efforts to use the electronic capabilities of PIV credentials as the common means of authentication for access to agency information systems. We reported in February 2008 that select agencies had generally not been using the cards for logical access. Since then, agencies have made limited progress in utilizing the electronic capabilities of the PIV credential for access to systems. Five of the agencies we reviewed (NASA, HUD, Interior, NRC, and USDA) had taken steps to acquire and deploy hardware and software allowing substantial numbers of users to access agency systems via PIV-based authentication, but none of them had fully implemented the capability or were requiring use of PIV cards as a primary means of authenticating users. For example, NASA officials reported that 83 percent of the agency’s Windows desktops were equipped with PIV card readers and that the agency’s network and 622 separate software applications had all been configured for authentication using PIV cards. Nevertheless, users still could log on to NASA systems using a combination of username and password. 
Agency officials estimated that only 10 percent of users were using PIV cards for authentication. According to NASA officials, users reported in a survey that they did not see the benefits of using the PIV card to access the agency network because they still had to maintain their network password to access other software applications or to access the network from another device. NASA officials stated that they were planning to upgrade additional applications to exclusively use PIV cards for logical access, but they did not have time frames for the completion of this activity. A HUD official stated that the department had enabled the electronic capabilities of the PIV card for access to its network, but nevertheless, users still could log onto the HUD network using a combination of username and password. According to the same official, HUD had deployed card readers on most of its agency computers to enable use of PIV cards for access to the network. An official stated that HUD is currently developing a strategy that will define milestones for departmentwide implementation of PIV-enabled logical access and identify the necessary technology to make full use of the PIV card for logical access. A HUD official stated that HUD had not established a date for full implementation of the electronic capabilities of the PIV card for logical access. According to a department official, Interior does not currently utilize PIV cards to access the department’s network within departmental offices but has begun utilizing the capability for remote access. An official reported that approximately 17,000 users require remote access to Interior systems on a regular basis. At the time of our review, between 8,000 and 9,000 of these users had been issued laptop computers that were configured to use PIV cards for authentication. Interior officials estimated that approximately 3,000 of those individuals were actually using PIV-based authentication on a regular basis. 
The Office of the Chief Information Officer issued a policy mandating the use of the PIV card for all remote access to the department’s network by December 2010, but that goal had not yet been reached. Officials reported they were beginning to plan for the implementation of PIV-enabled local access to the department’s network from workstations within its offices but had not yet set a milestone for completing that activity. NRC officials stated that they had acquired hardware and software to enable PIV-based logical access for all of their employees and planned to have them deployed to all workstations by the end of 2011. The agency had a small pilot of approximately 50 employees from headquarters and five regional offices under way to test PIV-based authentication to the agency’s network. The pilot was scheduled to be completed in the fourth quarter of fiscal year 2011, and the agency planned to achieve full implementation of PIV-based logical access by December 31, 2011. A department official stated that USDA had PIV-enabled all of its user hardware (both laptop and desktop systems) as well as 423 web-based software applications, including remote access to agency systems. This same official believed that some of USDA’s 90,000 users were using their PIV cards to access agency systems and applications, but they did not have an estimate of the number. USDA also had not established a target date for requiring use of the PIV card for access to agency systems and applications. The other three agencies (Commerce, DHS, and Labor) had made less progress. While all were developing plans or had limited trial deployments under way, none of these agencies had deployed hardware and software that would enable PIV-based authentication to systems and networks for substantial numbers of their users. According to a department official, Commerce was not using PIV cards for access to its systems. 
The department formed a working group with representatives from each component to investigate logical access solutions for the department. According to officials, one component, NIST, has enabled approximately 150 workstations to accept PIV cards for logical access, but NIST users were not regularly using the capability. Commerce’s identity management plan indicates that it intends to achieve full internal implementation of PIV-based logical access in fiscal year 2013. DHS officials stated they began planning in May 2011 for PIV-based systems access across the department in response to OMB’s February 2011 guidance. They added that the initial planning effort is expected to be completed in the fourth quarter of fiscal year 2012. At the time of our review, a pilot project was under way at DHS headquarters whereby approximately 1,000 employees were using PIV cards to access the agency’s network. DHS officials said they planned to expand this pilot project to all DHS headquarters offices by the end of the first quarter of fiscal year 2012. According to officials, the department is developing plans to require headquarters personnel to use PIV cards for access to the department’s network but has not established a completion date. Labor officials stated they were conducting a pilot in the Office of the Assistant Secretary for Administration and Management to test the use of PIV cards to access the agency’s network. According to these officials, Labor plans to enable PIV-based network access for a larger population of users beginning in fiscal year 2012; however, it may need to purchase replacement hardware and software to achieve this goal. Interoperability refers to the ability of two or more systems or components to exchange information and use the information exchanged. The FIPS 201 standard and related NIST guidance established specifications to ensure that PIV cards and systems developed by different vendors would be interoperable from a technical standpoint. 
NIST and GSA also established testing programs to ensure that PIV products and services conformed to these standards. These efforts have helped to ensure that card readers and associated software systems are able to read and process the data on any PIV card, including cards produced by different vendors for other federal agencies. In addition, Federal Identity, Credential, and Access Management implementation guidance issued by the federal CIO Council provides examples that illustrate how agencies could implement procedures to accept and electronically validate PIV credentials from other agencies. Moreover, OMB guidance requires agencies to take steps to establish processes and procedures for accepting and validating PIV cards issued by other agencies and ensure that agencies’ systems are capable of validating cards electronically. Several of the agencies we reviewed have taken steps to accept PIV cards issued by other agencies in limited circumstances. For example, officials from Interior and USDA stated they were working together to develop policies and procedures for enrolling PIV credentials from both agencies in their existing physical and logical access systems at key sites, such as the National Interagency Fire Center, which is staffed by employees of Interior and USDA’s Forest Service. According to a USDA official, the PIV cards of Interior employees can be manually enrolled in USDA’s physical access control system; however, when those employees stop working at USDA sites, their card registration information must be manually deleted from the USDA system. Similarly, according to a DHS official, the Federal Emergency Management Agency (FEMA) has developed procedures for manually enrolling the PIV credentials of other federal officials who need access to certain FEMA-controlled facilities, such as the National Emergency Center. 
These examples demonstrate the feasibility of establishing PIV card interoperability among agencies but also show the limitations of implementing “manual” processes that do not include electronic validation of credentials. Specifically, each of these cases is limited in scope and requires officials to take extra steps to ensure the validity of cards issued by other agencies. Only one of the agencies we reviewed had plans to establish a system capable of universally reading and electronically validating PIV cards issued by all other federal agencies. Specifically, NASA officials stated they were developing a formal credential registration process that would enable them to enroll the PIV credentials of external federal personnel seeking access to NASA facilities and information systems into the agency’s centralized identity management system. NASA officials estimated this project would be completed by the end of fiscal year 2011. Agencies reported that their mixed progress in issuing PIV credentials and using them for electronic authentication of individuals accessing federal facilities and information systems can be attributed to several major management and technical obstacles. These include logistical difficulties associated with issuing PIV cards to personnel in remote field locations, as well as tracking and then revoking cards issued to contractor personnel, the lack of priority attention and adequate resources being focused on implementing PIV-enabled physical access at all major facilities, the absence of a full suite of procedures for requiring the use of PIV cards for logical access, and the lack of procedures and assurances for interoperability among federal agencies. OMB’s August 2005 guidance specifies that HSPD-12 credentials are to be issued to all employees and contractor personnel in executive branch agencies who require long-term access to federally controlled facilities or information systems. 
The guidance instructed agencies to make risk-based decisions on whether to issue PIV cards to specific types of individuals, such as short-term employees (less than 6 months on the job), guest researchers, volunteers, and intermittent or temporary employees. Agencies were instructed to issue PIV cards to all employees and contractor personnel requiring long-term access to federal facilities and systems, regardless of physical location. Officials from four agencies (DHS, Interior, Labor, and USDA) stated that challenges in providing PIV cards to personnel in remote field office locations had hindered their ability to complete PIV-card issuance requirements set forth by OMB and in the FIPS 201 standard. These agencies all have large numbers of employees and contractor staff in field office locations, some of which are remote and difficult to access. The PIV-card issuance process requires at least one visit to an office equipped with a credentialing station, so that fingerprints can be taken and individuals can be enrolled in the agency’s identity management system. Credentialing stations were originally deployed to only a few field locations, thus requiring staff at remote locations to make potentially expensive and time-consuming trips to obtain PIV cards. DHS, Interior, and Labor officials indicated that the limited number of credentialing centers and the travel costs to access those centers made it logistically difficult to meet card issuance targets. While these logistical issues have caused challenges in issuing cards to remote field staff, actions can be taken to minimize the expense and disruption of issuing cards to these individuals. Officials from Interior, Labor, and USDA stated they had used “mobile” PIV credentialing stations provided by GSA’s Managed Services Office or other GSA-approved solutions to issue PIV cards to field staff. 
According to a USDA official, these inexpensive, portable stations, part of GSA’s USAccess Program, offer enhanced flexibility to enroll employees and activate PIV cards at field locations. In addition to logistical concerns, USDA officials stated they faced challenges in determining whether staff in the “other” category—specifically seasonal and temporary employees, such as firefighters and summer volunteers—should receive credentials and what processes should be established for handling them. According to these officials, the department’s tally of “other” staff receiving PIV credentials was low in part due to this challenge. However, these staff are not necessarily required to obtain PIV credentials. OMB guidance instructed agencies to make risk-based determinations on whether to issue PIV cards to staff in the “other” category. Once a determination is made not to issue PIV cards to a specific group, those individuals are not included in the total population needing cards and thus should not be a factor in calculating an agency’s progress in card issuance. Until agencies take steps to address logistical challenges associated with card issuance and make risk-based determinations about how to handle “other” staff, they will likely remain unable to meet HSPD-12’s objective of issuing PIV cards to all personnel requiring access to federal facilities and systems. Contractor and temporary staff may be responsible for carrying out a wide range of mission-critical tasks requiring access to agency facilities and information systems. The FIPS 201 standard requires agencies to implement an identity management system with the ability to track the status of PIV credentials throughout their lifecycle, including activation and issuance, as well as suspension, revocation, and destruction. Additionally, the standard requires that, upon the issuance of credentials, agencies keep track of all active, lost, stolen, and expired cards. 
To do so, agencies must establish a card registry to document and monitor all cards issued to employees and contractor staff. Officials from three agencies (Commerce, DHS, and HUD) identified difficulties they faced in monitoring and tracking contractor personnel, especially when contracts begin and end, as a reason for not fully complying with HSPD-12 requirements for background investigations and/or PIV card issuance and revocation. According to agency officials, the inability to track when contractor personnel leave prevents them from ensuring that all PIV credentials are returned upon termination of a contract. Commerce officials stated they had initiated a project to develop and deploy a system to improve tracking of PIV card issuance to contractor personnel. The system is being designed to automatically trigger revocation of PIV credentials as part of the exit process for departing contractor personnel. However, Commerce officials did not provide an estimated date for implementation of the new system. DHS officials stated they had experienced problems tracking contractor personnel and documenting when their credentials were scheduled to be revoked. Officials stated it was difficult to monitor contractor projects, which may often be extended, and ensure that their systems were updated to reflect these changes. The officials stated that they had developed revisions to their existing procedures to better ensure that PIV cards issued to contractor personnel are revoked, returned to the agency, and accounted for. However, they did not provide an estimated date for implementation of the revised procedures. HUD officials stated that although they had issued cards to all of their contractor personnel, they had deferred addressing issues with monitoring the status of contractor PIV cards. 
They stated that control procedures had not been put into place to ensure that PIV cards were promptly revoked for departing contractor staff, and officials acknowledged that some contractor staff had left the agency without returning PIV cards issued to them. HUD officials did not know how often this had occurred. According to these officials, the problem could be addressed by including all contractor staff in the identity management system HUD uses for PIV cards issued to employees and by establishing controls to ensure that cards are returned upon departure of all staff. However, they did not provide an estimated date for implementing these changes. At the time of our review, Commerce, DHS, and HUD had not set time frames for implementing planned improvements. Until they develop and implement procedures for effectively controlling the issuance of PIV cards to contractor personnel and revoking expired contractor cards, these agencies face the risk that unauthorized individuals could access their facilities and information systems if other compensating controls are not in place. HSPD-12 required the use of the PIV credential for access to federal facilities. OMB’s 2005 guidance instructed agencies to make risk-based determinations about the type of authentication mechanisms to utilize at their facilities and specified “minimal reliance” on visual authentication as a sole means of authenticating PIV credentials. OMB’s February 2011 guidance required agencies to increase usage of the electronic capabilities of PIV credentials as the common means of authentication for access to agency facilities. Officials from six agencies (Commerce, DHS, HUD, Interior, Labor, and USDA) indicated that implementing PIV-enabled physical access had not been a priority at their agencies and that resources had not been committed to fully implementing the electronic capabilities of the PIV card at all of their facilities as required by HSPD-12. 
Even though 6 years have passed since OMB first issued guidance on implementation of HSPD-12, Commerce, DHS, and Interior have not yet developed specific plans for fully implementing PIV-enabled physical access throughout their departments. At Commerce, a contractor-led study of the existing physical access control systems at major facilities and the infrastructure needed to develop a departmentwide HSPD-12-compliant system was completed in September 2010. However, Commerce has not yet developed a plan for implementing such a system within the department. DHS officials stated that they had not yet determined what physical access systems were in place throughout their agencies and what investment would be needed to upgrade or replace the systems to achieve a departmentwide HSPD-12-compliant system. According to a 2010 report by the DHS Office of Inspector General, the department had not made the implementation of an effective HSPD-12 program a priority and did not have a plan for enhancing the department’s physical access controls. DHS officials stated that they had recently formed a working group dedicated to physical access. The group had begun determining what systems were in place throughout the department and planned to report quarterly on its progress to OMB. Although Interior issued an official policy in 2009 requiring use of PIV credentials for physical access, the department does not have a plan in place to implement the policy. Interior officials stated that they plan to convene a working group of representatives from each departmental bureau to develop plans for modernizing their physical access control infrastructure. The other three agencies—HUD, Labor, and USDA—had developed plans for PIV-enabled physical access but had not obtained funds to pay for implementation or had delayed implementation to reduce investment costs. 
Officials from HUD, for example, had planned not to implement PIV-enabled access at field locations until each location was scheduled for renovations, in order to reduce costs. The agency planned to re-examine that strategy based on OMB’s February 2011 guidance. Labor officials stated that they previously had been planning to enable PIV-based access at their field locations in fiscal year 2012 but were planning to develop revised milestones for those implementations due to budget constraints. Officials at USDA stated that they were in the process of purchasing equipment for PIV-enabled physical access. Use of PIV credentials for physical access is unlikely to progress at these six agencies until greater priority is placed on implementation of PIV-based physical access control systems. Until Commerce, DHS, and Interior develop specific implementation plans for their major facilities, including identifying necessary infrastructure upgrades and time frames for deployment, they are unlikely to reach HSPD-12’s objective of using the PIV credential to enhance control over access to federal facilities. HUD, Labor, and USDA are also unlikely to reach that objective until they place greater priority on funding PIV-enabled physical access at their major facilities. HSPD-12 requires agencies to use PIV credentials for access to federal information systems to the maximum extent practicable. OMB’s 2005 guidance required agencies to prioritize implementation based on authentication risk assessments required by previous OMB and NIST guidance. Additionally, OMB’s February 2011 guidance required agencies to step up their efforts to use the electronic capabilities of PIV credentials as the common means of authentication for access to agency information systems. Officials from four agencies (HUD, NRC, NASA, and USDA) reported that various technical issues hindered using PIV cards as the primary means of access to agency networks and systems. 
One technical issue that agencies reported was the lack of backup procedures for authenticating employees who do not possess a PIV card. Officials from HUD, NASA, and USDA stated that, although they had deployed software and hardware to enable PIV-based access to systems and networks, they were not using the cards as the primary means of authentication to agency systems because they had not established such backup procedures. According to these officials, the issue of how to accommodate personnel without PIV cards was a major obstacle to requiring the use of PIV cards for access to networks and systems. There are several reasons why staff might not have a PIV card when trying to access agency systems. Individuals could have left the card at another location or lost the card. The card may have been damaged and made inoperable. Also, some staff may not have any cards issued to them. Short-term employees (less than 6 months on the job), guest researchers, volunteers, and intermittent or temporary employees, for example, may not be required to have PIV cards but may still need access to agency networks and systems. Agency officials reported that they were working on solutions to this problem. Officials at HUD and USDA, for example, stated that they were working on developing standard procedures to address these circumstances. NASA officials stated they were participating in a governmentwide team tasked with drafting guidance for issuing smart cards to people who do not qualify for PIV cards but need access to agency facilities and systems. Until HUD, NASA, and USDA develop and implement procedures for providing temporary logical access to their systems as a backup mechanism, they are unlikely to reach HSPD-12’s objective of using the PIV credential to enhance control over access to federal systems. 
Other technical issues reported by agency officials included adapting to the requirement that workstations be locked when PIV cards are removed and using hardware that was not compatible with PIV cards. Specifically, NRC and USDA officials stated that governmentwide security policies requiring workstations to be locked when removing the PIV card make using the PIV card for logical access in a laboratory setting difficult because employees routinely need access to multiple computers at the same time. If they were required to use the PIV card for logical access, they would be unable to remain logged in to multiple computers. Additionally, NASA officials stated that many of the agency’s employees utilize Apple Mac workstations or mobile devices to carry out their work responsibilities. The same officials noted that the PIV card is incompatible with these devices; therefore, employees must continue to use their username and password for access to the NASA network when using these devices. Officials from the other four agencies (Commerce, DHS, Interior, and Labor) indicated that implementing PIV-enabled logical access had not been a priority at their agencies and that resources had not been committed to fully implementing the electronic capabilities of the PIV card for access to their networks and systems. Commerce, DHS, Interior, and Labor officials, for example, stated that their agencies had not yet determined what logical access systems were currently in place throughout their agencies and what investment would be needed to upgrade or replace them to achieve a departmentwide HSPD-12-compliant system. They also stated that funding constraints had hindered implementing PIV-based logical access in a more timely manner. Commerce, DHS, Interior, and Labor are unlikely to fulfill the objectives of the HSPD-12 program until greater management priority is placed on implementation of PIV-based logical access control systems. 
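The backup-procedure obstacle described above reduces to a policy question: PIV-based logon can be mandated only after every exception (lost or damaged cards, exempt short-term staff, devices without card readers) has a defined alternative. A hypothetical sketch of such a decision routine, not any agency's actual implementation:

```python
# Hypothetical policy routine illustrating why agencies could not yet
# require PIV logon: every exception path still falls back to passwords.
def logon_method(has_active_piv: bool, reader_present: bool,
                 piv_exempt: bool) -> str:
    if piv_exempt:
        # Short-term employees, guest researchers, volunteers, and
        # other staff for whom a risk-based determination was made
        # not to issue a PIV card.
        return "password"
    if has_active_piv and reader_present:
        return "piv"
    # Lost or damaged card, or a device with no card reader
    # (e.g., unsupported workstations): without a temporary-access
    # procedure, the agency falls back to username and password.
    return "password"

print(logon_method(True, True, False))   # prints "piv"
print(logon_method(False, True, False))  # prints "password"
```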
One of the primary goals of the HSPD-12 program is to enable interoperability across federal agencies. As we have previously reported, prior to HSPD-12, there were wide variations in the quality and security of ID cards used to gain access to federal facilities. To overcome this limitation, HSPD-12 directed ID cards to have standard features and means for authentication. Further, guidance from OMB required agencies to have access control processes that accept and electronically verify PIV credentials issued by other federal agencies. Nevertheless, agencies have made minimal progress in implementing access control systems that can accept and validate PIV cards issued by other agencies. Several of the agencies we reviewed, including Commerce, HUD, and Labor, had not devoted resources or management attention to achieving cross-agency interoperability, according to agency officials. This limited progress reflects, in part, the low priority OMB initially put on achieving cross-agency interoperability. OMB guidance initially focused on card issuance and set performance measures keyed exclusively to progress in that area. According to an OMB official, specific interoperability requirements were not established until November 2009, when the office directed agencies to develop detailed policies for aligning their identity, credential, and access management activities with the Federal Identity, Credential and Access Management Roadmap and Implementation Guidance. As part of their policies, agencies were required to enable relevant applications to accept PIV cards from other executive branch agencies for authentication. In addition to a lack of systems and processes in place at agencies to electronically validate PIV cards issued by other agencies, there are also no processes in place to ensure that credentials issued by agencies are trustworthy and should be accepted by other agencies as a basis for granting access to their facilities and systems. 
Processes have not been developed to establish trustworthiness by validating the certification processes at agencies. HSPD-12 guidance allows agencies to independently develop FIPS 201-compliant credentialing systems, and NIST issued guidance in 2005 for certifying and accrediting organizations that issue PIV credentials. However, according to GSA officials, the approach envisioned in the NIST guidance, which relies on self-certification, has not been adequate to establish trust. The primary reason self-certification has not worked is that it does not include a provision for independent validation, such as through the use of third-party audits. OMB officials agreed that a third-party validation process would be useful in establishing trust. Until such a process is in place, agencies may be reluctant to authorize access to their facilities and systems based on PIV credentials issued by other agencies. Until agencies develop implementation plans for accepting and electronically verifying external agency credentials and a process is established to provide assurance that external PIV credentials are trustworthy, progress in achieving HSPD-12’s goal of governmentwide interoperability for PIV credentials will likely remain limited. Agencies have made substantial progress in issuing PIV cards to employees and contractor personnel and have begun using the electronic capabilities of the cards for physical and logical access but have made less progress in using the credentials for access to federal facilities and information systems. They face a variety of obstacles in fully issuing the credentials and making better use of their electronic capabilities. For example, several have experienced difficulties in issuing credentials to remote and “other” staff and in ensuring that expired credentials are promptly revoked. 
Six agencies were not using the electronic capabilities of the credentials for access to all of their major facilities because doing so was not a priority in terms of management commitment and resources. None of the eight agencies had fully implemented logical access to networks and systems using PIV credentials, half because of technical challenges and half because it was not a priority to do so. Delaying implementation of HSPD-12 means that the benefits of enhanced security that HSPD-12 is designed to provide are also being delayed. Without taking steps to resolve technical problems and setting a higher priority on implementation, agencies are not likely to make substantially better progress in addressing these obstacles. Establishing interoperability among agencies has also been a challenge. Agencies have established policies and procedures for accepting credentials from other agencies only in limited circumstances, in part because OMB only began requiring that agency systems accept credentials from other agencies in 2009. Interoperability among agencies has also been hindered by the lack of third-party audit mechanisms to establish the trustworthiness of agency implementations of HSPD-12. Until such mechanisms are in place, agencies are likely to continue to make slow progress in achieving interoperability. To address challenges in conducting background investigations, issuing PIV cards, and using the cards for physical and logical access, we are making 23 recommendations to the eight departments and agencies we reviewed in our report to help ensure they are meeting the HSPD-12 program’s objectives. Appendix IV contains these recommendations. 
To address the challenge of promoting the interoperability of PIV cards across agencies by ensuring that agency HSPD-12 systems are trustworthy, we recommend that the Director of OMB require the establishment of a certification process, such as through audits by third parties, for validating agency implementations of PIV credentialing systems. We sent draft copies of this report to the eight agencies covered by our review, as well as to OMB and GSA. We received written responses from Commerce, DHS, HUD, Interior, Labor, NASA, and NRC. These comments are reprinted in appendices V through XI. We received comments via e-mail from OMB, USDA, and GSA. Of the nine agencies to which we made recommendations, six (Commerce, DHS, Interior, Labor, NASA, and NRC) concurred with our recommendations. In cases where these agencies also provided technical comments, we have addressed them in the final report as appropriate. DHS, Interior, Labor, and NASA also provided information regarding specific actions they have taken or plan to take that address portions of our recommendations. Further, DHS, Labor, and NASA provided estimated timelines for completion of actions that would address our recommendations. HUD’s Acting Chief Human Capital Officer did not state whether the department concurred with our recommendations. However, she provided information about actions the department is taking to address each of them. For example, she provided updated information on HUD’s schedule for implementing PIV-based physical access control at its field locations and for requiring staff to use their PIV cards to gain access to agency systems. We have updated the final report with this information as appropriate. The two remaining agencies (OMB and USDA) did not comment on the recommendations addressed to them. However, OMB and USDA provided technical comments on the draft report, which were addressed in the final report as appropriate. We also received technical comments via e-mail from GSA. 
These comments have also been incorporated into the final report as appropriate. We are sending copies of this report to other interested congressional committees; the Secretaries of the Departments of Agriculture, Commerce, Homeland Security, Housing and Urban Development, the Interior, and Labor; the Administrators of the General Services Administration and National Aeronautics and Space Administration; the Chairman of the Nuclear Regulatory Commission; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Key contributors to the report are listed in appendix XII. Our objectives were to (1) determine the progress that selected agencies have made in implementing the requirements of Homeland Security Presidential Directive 12 (HSPD-12) and (2) identify obstacles agencies face in implementing the requirements of HSPD-12. We conducted our audit work at the same eight agencies we reviewed for our last report. They were the Departments of Agriculture, Commerce, the Interior, Homeland Security (DHS), Housing and Urban Development (HUD), and Labor; the National Aeronautics and Space Administration (NASA); and the Nuclear Regulatory Commission (NRC). These agencies were chosen in 2008 because they were each at a different stage of implementing smart card programs and were using different strategies for implementing HSPD-12. Our selection included agencies that were acquiring personal identity verification (PIV) card systems through the General Services Administration’s (GSA) Managed Services Office as well as agencies that were acquiring PIV card systems independently. 
To address our first objective, we reviewed HSPD-12, Federal Information Processing Standards (FIPS) 201, related National Institute of Standards and Technology (NIST) special publications, and guidance from the Office of Management and Budget (OMB) to determine what progress agencies should be making in completing background checks, issuing PIV cards, using PIV cards for physical and logical access, and achieving interoperability with other federal agencies. We analyzed agencies’ quarterly status reports to determine the actual progress they had made in each of these areas and compared it with governmentwide guidance, as well as the results from our 2008 report. In order to assess the reliability of the data collected from the eight agencies’ quarterly status reports specific to background investigations and PIV card issuance, we submitted questions to the agencies and reviewed agency documentation. In some cases, as we noted where applicable, the data included in the reports were based on the agencies’ best estimates. We determined the data were sufficiently reliable for determining overall agency progress in the areas of background investigations and PIV card issuance. To assess progress in the use of PIV credentials for physical and logical access, we reviewed agency documentation such as HSPD-12 implementation plans and policies and discussed progress with agency officials. Additionally, we reviewed previous GAO and agency inspector general reports. To address our second objective, we interviewed officials from the selected agencies to obtain information on obstacles they faced in implementing HSPD-12 requirements, including difficulties in completing background checks, issuing PIV cards, using PIV cards for physical and logical access, and achieving interoperability with other federal agencies. 
We analyzed the obstacles that were identified to determine whether they were consistent across the agencies in our sample and whether they had been raised or addressed in our previous reviews. We also assessed OMB, GSA, NIST, and federal Chief Information Officers (CIO) Council documentation to determine the extent to which these obstacles could be addressed within the framework of existing guidance. Finally, we interviewed program officials from OMB and GSA who had been involved in supporting implementation of HSPD-12 across the government to discuss actions they had taken to assist agencies in implementing HSPD-12 and to validate the implementation obstacles reported by agency officials. We conducted this performance audit at Commerce, DHS, GSA, HUD, Interior, Labor, NASA, NRC, OMB, and USDA in the Washington, D.C., area from October 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The requirements of PIV-II include the following:
• specifications for the components of the PIV system that employees and contractor personnel will interact with, such as PIV cards, card and biometric readers, and personal identification number (PIN) input devices;
• security specifications for the card issuance and management provisions;
• a suite of authentication mechanisms supported by the PIV card and requirements for a set of graduated levels of identity assurance;
• specifications for the physical characteristics of PIV cards, including requirements for both contact and contactless interfaces and the ability to pass certain durability tests; and
• mandatory information that is to appear on the front and back of the cards, such as a photograph, cardholder name, card serial number, and issuer identification.

There are many components of a PIV-II system, including the following:
• enrollment stations—used by the issuing agency to obtain the applicant’s information, including digital images of fingerprints and a digital photograph.
• an ID management system—stores and manages cardholder information, including the status of assigned credentials.
• card issuance stations—issue PIV cards to applicants. Prior to releasing a PIV card to the applicant, the issuer first matches the applicant’s fingerprint to the fingerprint on the PIV card. Once a match has been verified, the applicant is issued the card.
• a card management system—manages life-cycle maintenance tasks associated with the credentials, such as “unlocking” PIV cards during issuance or updating a PIN or digital certificate on the card.
• a physical access control system—permits or denies a user access to a building or room. This system may use a variety of authentication mechanisms, ranging from visual inspection by a guard to fingerprint scanning. Once the user has been authenticated and access has been authorized, the physical access control system grants entry to the user. 
• a logical access control system—permits or denies a user access to information and systems. This system may employ a variety of authentication methods, such as requiring users to enter a password or perform a fingerprint scan.
• a public key infrastructure (PKI)—allows for electronic verification of the status of the digital certificates contained on the PIV card. The status of the PIV card—whether it is valid, revoked, or expired—is verified by the card management system.

NIST has issued several special publications (SP) providing supplemental guidance on various aspects of the FIPS 201 standard. Selected special publications are summarized in this appendix.

SP 800-73-3 is a companion document to FIPS 201 that specifies the technical aspects of retrieving and using the identity credentials stored in a PIV card’s memory. This publication is divided into four parts and specifies detailed requirements for the interface between a smart card and other PIV systems. The publication aims to promote interoperability among PIV systems across the federal government by constraining vendors’ interpretation of FIPS 201.

SP 800-76-1 outlines technical acquisition and formatting specifications for the biometric credentials of the PIV system, including the PIV card.

SP 800-78-3 outlines the cryptographic mechanisms and objects that employ cryptography as specified in FIPS 201. This publication also describes the cryptographic requirements for keys and authentication information stored on the PIV card, status information generated by PKI Certification Authorities, and management of information stored on the PIV card, and it identifies PIV card infrastructure components that support issuance and management.

SP 800-79-1 describes the guidelines that are to be used by federal departments and agencies to accredit the capability and reliability of PIV card issuers they use to perform PIV card services, such as identity proofing, applicant registration, and card issuance. 
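The three credential states named above (valid, revoked, or expired) can be illustrated with a minimal status check. Real implementations validate X.509 certificates against a certificate revocation list or an OCSP responder; the serial numbers and in-memory revocation set below are invented stand-ins for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PivCertificate:
    serial: str
    not_after: datetime  # expiration date carried in the certificate

# Stand-in for a CRL or OCSP responder; serials here are hypothetical.
REVOKED_SERIALS = {"1002"}

def credential_status(cert, now=None):
    """Classify a PIV credential as 'revoked', 'expired', or 'valid',
    mirroring the three states named in the FIPS 201 guidance."""
    now = now or datetime.now(timezone.utc)
    if cert.serial in REVOKED_SERIALS:
        return "revoked"       # revocation takes precedence over expiry
    if cert.not_after <= now:
        return "expired"
    return "valid"

now = datetime(2011, 9, 1, tzinfo=timezone.utc)
good = PivCertificate("1001", datetime(2016, 9, 1, tzinfo=timezone.utc))
old = PivCertificate("1003", datetime(2010, 1, 1, tzinfo=timezone.utc))
print(credential_status(good, now), credential_status(old, now))  # valid expired
```

Checking revocation before expiry matters: a card that is both revoked and expired should be reported as revoked, since revocation implies the credential should never be trusted again.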
The new guidelines are based on emergent service models (in-house, leased, shared, etc.), lessons learned in past accreditations, and the directives in OMB memorandums. The publication also describes an assessment model that includes conformance testing, certification, and accreditation. This document provides examples of PIV organization management structures, an objective set of controls for PIV card issuers, an assessment and accreditation methodology that assesses the capability and reliability of a PIV card issuer based on these controls, and sample accreditation decision letters. SP 800-85A-2 outlines a suite of tests to validate a software developer’s PIV middleware and card applications to determine whether they conform to the requirements specified in SP 800-73-3. This publication also includes detailed test assertions that provide the procedures to guide the tester in executing and managing the tests. This document is intended to allow (1) software developers to develop PIV middleware and card applications that can be tested against the interface requirements specified in SP 800-73-3; (2) software developers to develop tests that they can perform internally for their PIV middleware and card applications during the development phase; and (3) certified and accredited test laboratories to develop tests that include the test suites specified in this document and that can be used to test the PIV middleware and card applications for conformance to SP 800-73-3. SP 800-85B outlines a suite of tests to validate a developer’s PIV data elements and components to determine whether they conform to the requirements specified in SP 800-73, SP 800-76, and SP 800-78. This publication also includes detailed test assertions that provide the procedures to guide the tester in executing and managing the tests. 
This document is intended to allow (1) developers of PIV components to develop modules that can be tested against the requirements specified in SP 800-73-1, SP 800-76, and SP 800-78; (2) developers of PIV components to develop tests that they can perform internally for their PIV components during the development phase; and (3) accredited test laboratories to develop tests that include the test suites specified in this document and that can be used to test the PIV components for conformance to SP 800-73-1, SP 800-76, and SP 800-78.

SP 800-87, Revision 1 (2008), provides the organizational codes necessary to establish the Federal Agency Smart Credential Number that is required to be included in the FIPS 201 Card Holder Unique ID (CHUID). SP 800-87 is a companion document to FIPS 201. Appendix A lists the updated agency codes for the identification of federal and federally assisted organizations to be used in the PIV CHUID.

SP 800-96 provides requirements for PIV card readers in the area of performance and communications characteristics to foster interoperability. It also outlines requirements for the contact and contactless card readers for both physical and logical access control systems.

SP 800-104 provides additional information on the PIV card color-coding for designating employee affiliation. The recommendations in this document complement FIPS 201 in order to increase reliability when visual verification of PIV cards is implemented.

SP 800-116 provides best practice guidelines for integrating the PIV card with the physical access control systems (PACS) that authenticate the cardholders at federal facilities. Specifically, this publication discusses various PIV card capabilities, so that risk-based assessments can be made and appropriate PIV authentication mechanisms selected to manage physical access to federal government facilities. 
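As a simple illustration of how the SP 800-87 agency codes might be consumed, the sketch below resolves the four-digit agency code carried in a card's FASC-N (part of the CHUID) against a known-code table. The codes and agency names shown are placeholders, not entries from SP 800-87 Appendix A.

```python
# Placeholder subset of agency codes; the authoritative list is in
# SP 800-87 Appendix A (the codes below are NOT real assignments).
AGENCY_CODES = {
    "0001": "Example Department A",
    "0002": "Example Agency B",
}

def lookup_agency(fascn_agency_code):
    """Resolve the 4-digit agency code from a card's FASC-N against a
    known-code table; reject anything that is not four decimal digits."""
    if len(fascn_agency_code) != 4 or not fascn_agency_code.isdigit():
        raise ValueError("agency code must be four decimal digits")
    return AGENCY_CODES.get(fascn_agency_code, "unknown agency")

print(lookup_agency("0001"))  # Example Department A
```

A lookup like this is one of the building blocks an access control system could use when deciding whether to honor a PIV card issued by another agency.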
This document also proposes a PIV implementation maturity model to measure the progress of agencies’ PIV implementations and recommends an overall strategy for agency implementation of PIV authentication mechanisms within PACS.

To ensure that PIV credentials are issued only to employees and contractor staff requiring them, we recommend that the Secretary of Agriculture take steps to identify which staff in the “other” category should receive PIV cards and establish procedures for handling such cases.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of Agriculture take the following three actions:
• Ensure that the department’s plans for PIV-enabled physical access at major facilities are implemented in a timely manner.
• Require staff with PIV cards to use them to access systems and networks and develop and implement procedures for providing temporary access to staff who do not have PIV cards.
• Develop and implement procedures to allow employees who need to access multiple computers simultaneously to use the PIV card to access each computer.

To ensure that PIV cards do not remain in the possession of staff whose employment or contract with the federal government is over, we recommend that the Secretary of Commerce establish controls, in addition to time frames for implementing a new tracking system, to ensure that PIV cards are revoked in a timely fashion.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of Commerce take the following two actions:
• Develop specific implementation plans for enabling PIV-based access to the department’s major facilities, including time frames for deployment. 
• Ensure that plans for PIV-enabled logical access to the department’s systems and networks are implemented in a timely manner.

To ensure that PIV credentials are issued to all employees and contractor staff requiring them, we recommend that the Secretary of Homeland Security make use of portable credentialing systems, such as mobile activation stations, to economically issue PIV credentials to staff in remote locations.

To ensure that PIV cards do not remain in the possession of staff whose employment or contract with the federal government is over, we recommend that the Secretary of Homeland Security establish specific time frames for implementing planned revisions to the department’s tracking procedures, to ensure that PIV cards are revoked in a timely fashion.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of Homeland Security take the following two actions:
• Develop specific implementation plans for enabling PIV-based access to the department’s major facilities, including identifying necessary infrastructure upgrades and time frames for deployment.
• Ensure that plans for PIV-enabled logical access to the department’s systems and networks are implemented in a timely manner.

To ensure that PIV cards do not remain in the possession of staff whose employment or contract with the federal government is over, we recommend that the Secretary of Housing and Urban Development develop and implement control procedures to ensure that PIV cards are revoked in a timely fashion.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of Housing and Urban Development take the following two actions:
• Ensure that the department’s plans for PIV-enabled physical access at major facilities are implemented in a timely manner. 
• Require staff with PIV cards to use them to access systems and networks and develop and implement procedures for providing temporary access to staff who do not have PIV cards.

To ensure that PIV credentials are issued to all employees and contractor staff requiring them, we recommend that the Secretary of the Interior make greater use of portable credentialing systems, such as mobile activation stations, to economically issue PIV credentials to staff in remote locations.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of the Interior take the following two actions:
• Develop specific implementation plans for enabling PIV-based access to the department’s major facilities, including identifying necessary infrastructure upgrades and time frames for deployment.
• Ensure that plans for PIV-enabled logical access to Interior’s systems and networks are implemented in a timely manner.

To ensure that PIV credentials are issued to all employees and contractor staff requiring them, we recommend that the Secretary of Labor make greater use of portable credentialing systems, such as mobile activation stations, to economically issue PIV credentials to staff in remote locations.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal facilities, networks, and systems, we recommend that the Secretary of Labor take the following two actions:
• Ensure that the department’s plans for PIV-enabled physical access at major facilities are implemented in a timely manner.
• Ensure that plans for PIV-enabled logical access to Labor’s systems and networks are implemented in a timely manner. 
To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal networks and systems, we recommend that the Administrator of NASA take the following two actions:
• Require staff with PIV cards to use them to access systems and networks and develop and implement procedures for providing temporary access to staff who do not have PIV cards.
• Develop and implement procedures for PIV-based logical access on Apple Mac workstations and mobile devices that do not rely on direct interfaces with the PIV card, which may be impractical on those devices.

To meet the HSPD-12 program’s objectives of using the electronic capabilities of PIV cards for access to federal networks and systems, we recommend that the Chairman of the NRC develop and implement procedures to allow staff who need to access multiple computers simultaneously to use the PIV card to access each computer.

In addition to the contact named above, John de Ferrari, Assistant Director; Sher’rie Bacon; Marisol Cruz; Neil Doherty; Matthew Grote; Lee McCracken; Constantine Papanastasiou; David Plocher; and Maria Stattel made key contributions to this report.
To increase the security of federal facilities and information systems, the President issued Homeland Security Presidential Directive 12 (HSPD-12) in 2004. This directive ordered the establishment of a governmentwide standard for secure and reliable forms of ID for employees and contractors who access government-controlled facilities and information systems. The National Institute of Standards and Technology (NIST) defined requirements for such personal identity verification (PIV) credentials based on "smart cards"—plastic cards with integrated circuit chips to store and process data. The Office of Management and Budget (OMB) directed federal agencies to issue and use PIV credentials to control access to federal facilities and systems. GAO was asked to determine the progress that selected agencies have made in implementing the requirements of HSPD-12 and identify obstacles agencies face in implementing those requirements. To perform the work, GAO reviewed plans and other documentation and interviewed officials at the General Services Administration, OMB, and eight other agencies. Overall, OMB and federal agencies have made progress but have not fully implemented HSPD-12 requirements aimed at establishing a common identification standard for federal employees and contractors. OMB, the federal Chief Information Officers Council, and NIST have all taken steps to promote full implementation of HSPD-12. For example, in February 2011, OMB issued guidance emphasizing the importance of agencies using the electronic capabilities of PIV cards they issue to their employees, contractor personnel, and others who require access to federal facilities and information systems. The agencies in GAO's review—the Departments of Agriculture, Commerce, Homeland Security, Housing and Urban Development, the Interior, and Labor; the National Aeronautics and Space Administration; and the Nuclear Regulatory Commission—have made mixed progress in implementing HSPD-12 requirements. 
Specifically, they have made substantial progress in conducting background investigations on employees and others and in issuing PIV cards, fair progress in using the electronic capabilities of the cards for access to federal facilities, and limited progress in using the electronic capabilities of the cards for access to federal information systems. In addition, agencies have made minimal progress in accepting and electronically authenticating cards from other agencies. The mixed progress can be attributed to a number of obstacles agencies have faced in fully implementing HSPD-12 requirements. Specifically, several agencies reported logistical problems in issuing credentials to employees in remote locations, which can require costly and time-consuming travel. In addition, agencies have not always established effective mechanisms for tracking the issuance of credentials to federal contractor personnel--or for revoking those credentials and the access they provide when a contract ends. The mixed progress in using the electronic capabilities of PIV credentials for physical access to major facilities is a result, in part, of agencies not making it a priority to implement PIV-enabled physical access control systems at all of their major facilities. Similarly, a lack of prioritization has kept agencies from being able to require the use of PIV credentials to obtain access to federal computer systems (known as logical access), as has the lack of procedures for accommodating personnel who lack PIV credentials. According to agency officials, a lack of funding has also slowed the use of PIV credentials for both physical and logical access. Finally, the minimal progress in achieving interoperability among agencies is due in part to insufficient assurance that agencies can trust the credentials issued by other agencies. 
Without greater agency management commitment to achieving the objectives of HSPD-12, agencies are likely to continue to make mixed progress in using the full capabilities of the credentials. GAO is making recommendations to nine agencies, including OMB, to achieve greater implementation of PIV card capabilities. Seven of the nine agencies agreed with GAO's recommendations or discussed actions they were taking to address them; two agencies did not comment.
Before enactment of the Employee Retirement Income Security Act of 1974 (ERISA), few rules governed the funding of defined benefit pension plans, and participants in these plans had no guarantees they would receive the benefits promised. When Studebaker’s pension plan failed in the 1960s, for example, many plan participants lost their pensions. Such experiences prompted the passage of ERISA to better protect the retirement savings of Americans covered by private pension plans. Along with other changes, ERISA established PBGC to pay the pension benefits of participants, subject to certain limits, in the event that an employer could not. ERISA also required PBGC to encourage the continuation and maintenance of voluntary private pension plans and to maintain premiums set by the corporation at the lowest level consistent with carrying out its obligations. Under ERISA, the termination of a single-employer defined-benefit plan results in an insurance claim with the single-employer program if the plan has insufficient assets to pay all benefits accrued under the plan up to the date of plan termination. PBGC may pay only a portion of the claim because ERISA places limits on the PBGC benefit guarantee. For example, PBGC generally does not guarantee annual benefits above a certain amount, currently about $44,000 per participant at age 65. Additionally, benefit increases in the 5 years immediately preceding plan termination are not fully guaranteed, though PBGC will pay a portion of these increases. The guarantee is limited to certain benefits, including so-called “shut-down benefits”—significant subsidized early retirement benefits that are triggered by layoffs or plant closings that occur before plan termination. The guarantee does not generally include supplemental benefits, such as the temporary benefits that some plans pay to participants from the time they retire until they are eligible for Social Security benefits. 
Following enactment of ERISA, however, concerns were raised about the potential losses that PBGC might face from the termination of underfunded plans. To protect PBGC, ERISA was amended in 1986 to require that plan sponsors meet certain additional conditions before terminating an underfunded plan. (See app. I.) For example, sponsors could voluntarily terminate their underfunded plans only if they were bankrupt or generally unable to pay their debts without the termination. Concerns about PBGC finances also resulted in efforts to strengthen the minimum funding rules incorporated by ERISA in the Internal Revenue Code (IRC). In 1987, for example, the IRC was amended to require that plan sponsors calculate each plan’s current liability, and make additional contributions to the plan if it is underfunded to the extent defined in the law. As discussed in a report we issued earlier this year, concerns that the 30-year Treasury bond rate no longer resulted in reasonable current liability calculations have led both the Congress and the Administration to propose alternative rates for these calculations. Despite the 1987 amendments to ERISA, concerns about PBGC’s financial condition persisted. In 1990, as part of our effort to call attention to high-risk areas in the federal government, we noted that weaknesses in the single-employer insurance program’s financial condition threatened PBGC’s long-term viability. We stated that minimum funding rules still did not ensure that plan sponsors would contribute enough for terminating plans to have sufficient assets to cover all promised benefits. In 1992, we also reported that PBGC had weaknesses in its internal controls and financial systems that placed the entire agency, and not just the single-employer program, at risk. 
Three years later, we reported that legislation enacted in 1994 had addressed weaknesses in PBGC’s programs and that we believed improvements had been significant enough for us to remove the agency’s high-risk designation. Since that time, we have continued to monitor PBGC’s financial condition and internal controls. For example, in 1998, we reported that adverse economic conditions could threaten PBGC’s financial condition despite recent improvements; in 2000, we reported that contracting weaknesses at PBGC, if uncorrected, could result in PBGC paying too much for required services; and this year, we reported that weaknesses in the PBGC budgeting process limited its control over administrative expenses. PBGC receives no direct federal tax dollars to support the single-employer pension insurance program. The program receives the assets of terminated underfunded plans and any of the sponsor’s assets that PBGC recovers during bankruptcy proceedings. PBGC finances the unfunded liabilities of terminated plans with (1) premiums paid by plan sponsors and (2) income earned from the investment of program assets. Initially, plan sponsors paid only a flat-rate premium of $1 per participant per year; however, the flat rate has been increased over the years and is currently $19 per participant per year. To provide an incentive for sponsors to better fund their plans, a variable-rate premium was added in 1987. The variable-rate premium, which started at $6 for each $1,000 of unfunded vested benefits, was initially capped at $34 per participant. The variable rate was increased to $9 for each $1,000 of unfunded vested benefits starting in 1991, and the cap on variable-rate premiums was removed starting in 1996. After increasing sharply in the 1980s, flat-rate premium income declined from $753 million in 1993 to $654 million in 2002, in constant 2002 dollars. (See fig. 1.) Income from the variable-rate premium fluctuated widely over that period. 
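The premium structure described above lends itself to a short worked calculation. The sketch below is illustrative only, using the rates cited in this section (a $19 flat rate per participant and $9 per $1,000 of unfunded vested benefits, with no cap following its removal in 1996); it is not PBGC’s actual billing logic, and the example plan is hypothetical.

```python
def annual_premium(participants, unfunded_vested_benefits,
                   flat_rate=19.0, variable_rate_per_1000=9.0):
    """Illustrative single-employer premium: a flat-rate charge per
    participant plus a variable-rate charge per $1,000 of unfunded
    vested benefits (uncapped)."""
    flat = participants * flat_rate
    variable = (unfunded_vested_benefits / 1000.0) * variable_rate_per_1000
    return flat + variable

# A hypothetical plan with 10,000 participants and $50 million in
# unfunded vested benefits:
premium = annual_premium(10_000, 50_000_000)
# flat: 10,000 x $19 = $190,000; variable: 50,000 x $9 = $450,000
```

A fully funded plan would pay only the flat-rate portion, which is why, as noted above, the variable-rate premium was intended as a funding incentive.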
The slight decline in flat-rate premium revenue over the last decade, in real dollars, indicates that the increase in insured participants has not been sufficient to offset the effects of inflation over the period. Essentially, while the number of participants has grown since 1980, growth has been sluggish. Additionally, after increasing during the early 1980s, the number of insured single-employer plans has decreased dramatically since 1986. (See fig. 2.) The decline in variable-rate premiums in 2002 may be due to a number of factors. For example, all else equal, an increase in the rate used to determine the present value of benefits reduces the degree to which reports indicate plans are underfunded, which reduces variable-rate premium payments. The Job Creation and Worker Assistance Act of 2002 increased the statutory interest rate for variable-rate premium calculations from 85 percent to 100 percent of the interest rate on 30-year U.S. Treasury securities for plan years beginning after December 31, 2001, and before January 1, 2004. Investment income is also a large source of funds for the single-employer insurance program. The law requires PBGC to invest a portion of the funds generated by flat-rate premiums in obligations issued or guaranteed by the United States, but gives PBGC greater flexibility in the investment of other assets. For example, PBGC may invest funds recovered from terminated plans and plan sponsors in equities, real estate, or other securities and funds from variable-rate premiums in government or private fixed-income securities. According to PBGC, however, by policy, it invests all premium income in Treasury securities. As a result of the law and investment policies, the majority of the single-employer program’s assets are invested in U.S. government securities. (See fig. 3.) Since 1990, except for 3 years, PBGC has achieved a positive return on the investments of single-employer program assets. (See fig. 4.) 
According to PBGC, over the last 10 years, the total return on these investments has averaged about 10 percent. For the most part, liabilities of the single-employer pension insurance program consist of the present value of insured participant benefits. PBGC calculates present values using interest rate factors that, along with a specified mortality table, reflect annuity prices, net of administrative expenses, obtained from surveys of insurance companies conducted by the American Council of Life Insurers. In addition to the estimated total liabilities of underfunded plans that have actually terminated, PBGC includes in program liabilities the estimated unfunded liabilities of underfunded plans that it believes will probably terminate in the near future. PBGC may classify an underfunded plan as a probable termination when, among other things, the plan’s sponsor is in liquidation under federal or state bankruptcy laws. The single-employer program has had an accumulated deficit—that is, program assets have been less than the present value of benefits and other liabilities—for much of its existence. (See fig. 5.) In fiscal year 1996, the program had its first accumulated surplus, and by fiscal year 2000, the accumulated surplus had increased to almost $10 billion, in 2002 dollars. However, the program’s finances reversed direction in 2001, and at the end of fiscal year 2002, its accumulated deficit was about $3.6 billion. PBGC estimates that this deficit grew to $5.7 billion by July 31, 2003. Despite this large deficit, according to a PBGC analysis, the single-employer program was estimated to have enough assets to pay benefits through 2019, given the program’s conditions and PBGC assumptions as of the end of fiscal year 2002. Losses since that time may have shortened the period over which the program will be able to cover promised benefits. 
The financial condition of the single-employer pension insurance program returned to an accumulated deficit in 2002 largely due to the termination, or expected termination, of several severely underfunded pension plans. In 1992, we reported that many factors contributed to the degree plans were underfunded at termination, including the payment at termination of additional benefits, such as subsidized early retirement benefits, that had been promised to plan participants if plants or companies ceased operations. These factors likely contributed to the degree that plans terminated in 2002 were underfunded. Factors that increased the severity of the plans’ unfunded liability in 2002 were the recent sharp decline in the stock market and a general decline in interest rates. The current minimum funding rules and variable-rate premiums were not effective at preventing those plans from being severely underfunded at termination. Total estimated losses in the single-employer program due to the actual or probable termination of underfunded plans increased from $705 million in fiscal year 2001 to $9.3 billion in fiscal year 2002, in 2002 dollars. In addition to $3.0 billion in losses from the unfunded liabilities of terminated plans, the $9.3 billion included $6.3 billion in losses from the unfunded liabilities of plans that were expected to terminate in the near future. Some of the terminations considered probable at the end of fiscal year 2002 have already occurred. For example, in December 2002, PBGC involuntarily terminated an underfunded Bethlehem Steel Corporation pension plan, which resulted in the single-employer program assuming responsibility for about $7.2 billion in PBGC-guaranteed liabilities, about $3.7 billion of which was not funded at termination. Much of the program’s losses resulted from the termination of underfunded plans sponsored by failing steel companies. 
PBGC estimates that in 2002, underfunded steel company pension plans accounted for 80 percent of the $9.3 billion in program losses for the year. The three largest losses in the single-employer program’s history resulted from the termination of underfunded plans sponsored by failing steel companies: Bethlehem Steel, LTV Steel, and National Steel. All three plans had either terminated or been listed as probable terminations for 2002. Giant vertically integrated steel companies, such as Bethlehem Steel, have faced extreme economic difficulty for decades, and efforts to salvage their defined-benefit plans have largely proved unsuccessful. According to PBGC’s executive director, underfunded steel company pension plans have accounted for 58 percent of PBGC single-employer losses since 1975. The termination of underfunded plans in 2002 occurred after a sharp decline in the stock market had reduced plan asset values and a general decline in interest rates had increased plan liability values, and the sponsors did not make the contributions necessary to adequately fund the plans before they were terminated. The combined effect of these factors was a sharp increase in the unfunded liabilities of the terminating plans. According to annual reports (Annual Return/Report of Employee Benefit Plan, Form 5500) submitted by Bethlehem Steel Corporation, for example, in the 7 years from 1992 to 1999, the Bethlehem Steel pension plan went from 86 percent funded to 97 percent funded. (See fig. 6.) From 1999 to plan termination in December 2002, however, plan funding fell to 45 percent as assets decreased and liabilities increased, and sponsor contributions were not sufficient to offset the changes. A decline in the stock market, which began in 2000, was a major cause of the decline in plan asset values, and the associated increase in the degree that plans were underfunded at termination. 
For example, while total returns for stocks in the Standard and Poor’s 500 index (S&P 500) exceeded 20 percent for each year from 1995 through 1999, they were negative starting in 2000, with negative returns reaching 22.1 percent in 2002. (See fig. 7.) Surveys of plan investments by Greenwich Associates indicated that defined-benefit plans in general had about 62.8 percent of their assets invested in U.S. and international stocks in 1999. A stock market decline as severe as the one experienced from 2000 through 2002 can have a devastating effect on the funding of plans that had invested heavily in stocks. For example, according to a survey, the Bethlehem Steel defined-benefit plan had about 73 percent of its assets (about $4.3 billion of $6.1 billion) invested in domestic and foreign stocks on September 30, 2000. One year later, assets had decreased $1.5 billion, or 25 percent, and when the plan was terminated in December 2002, its assets had been reduced another 23 percent to about $3.5 billion—far less than needed to finance an estimated $7.2 billion in PBGC-guaranteed liabilities. Over that same general period, stocks in the S&P 500 had a negative return of 38 percent. In addition to the possible effect of the stock market’s decline, a drop in interest rates likely had a negative effect on plan funding levels by increasing plan termination costs. Lower interest rates increase plan termination liabilities by increasing the present value of future benefit payments, which in turn increases the purchase price of group annuity contracts used to terminate defined-benefit pension plans. For example, a PBGC analysis indicates that a drop in interest rates of 1 percentage point, from 6 percent to 5 percent, increased the termination liabilities of the Bethlehem Steel pension plan by about 9 percent, which indicates the cost of terminating the plan through the purchase of a group annuity contract would also have increased. 
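The sensitivity of termination liabilities to interest rates can be illustrated with a simple present-value calculation. This is a deliberately simplified sketch — a level 25-year annuity with annual payments — not PBGC’s valuation method, which relies on mortality tables and surveyed annuity prices; the payment amount and horizon here are hypothetical.

```python
def pv_annuity(annual_payment, years, rate):
    """Present value of a level annuity-immediate: one payment at the
    end of each year for `years` years, discounted at `rate`."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# The 1-percentage-point drop discussed above, from 6 to 5 percent:
pv_6 = pv_annuity(10_000, 25, 0.06)
pv_5 = pv_annuity(10_000, 25, 0.05)
increase = pv_5 / pv_6 - 1  # fractional increase in the liability
```

For this stylized benefit stream, the same 1-point drop raises the present value by roughly 10 percent — the same direction as, though not the exact magnitude of, the Bethlehem Steel figure cited above.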
Relevant interest rates may have declined 3 percentage points or more since 1990. For example, interest rates on long-term high-quality corporate bonds approached 10 percent at the start of the 1990s, but were below 7 percent at the end of 2002. (See fig. 8.) IRC minimum funding rules and ERISA variable-rate premiums, which are designed to ensure plan sponsors adequately fund their plans, did not have the desired effect for the terminated plans that were added to the single-employer program in 2002. The amount of contributions required under IRC minimum funding rules is generally the amount needed to fund benefits earned during that year plus that year’s portion of other liabilities that are amortized over a period of years. Also, the rules require the sponsor to make an additional contribution if the plan is underfunded to the extent defined in the law. However, plan funding is measured using current liabilities, which a PBGC analysis indicates have typically been less than termination liabilities. Additionally, plans can earn funding credits, which can be used to offset minimum funding contributions in later years, by contributing more than required according to minimum funding rules. Therefore, sponsors of underfunded plans may avoid or reduce minimum funding contributions to the extent their plan has a credit balance in the account, referred to as the funding standard account, used by plans to track minimum funding contributions. Current and termination liabilities differ because the assumptions used to calculate them differ. 
For example, some plan participants may retire earlier if a plan is terminated than they would if the plan continues operations, and lowering the assumed retirement age generally increases plan liabilities, especially if early retirement benefits are subsidized. With respect to two of the terminated underfunded pension plans that we examined, for example, a PBGC analysis indicates the following: The retirement age assumption for the Anchor Glass pension plan on an ongoing plan basis was 65 for separated-vested participants. However, the retirement age assumption appropriate for those participants on a termination basis was 58—a decrease of 7 years. According to PBGC, changing retirement age assumptions for all participants, including separated-vested participants, resulted in a net increase in plan liabilities of about 4.6 percent. The retirement age assumption for the Bethlehem Steel pension plan on an ongoing plan basis was 62 for those active participants eligible for unreduced benefits after 30 years of service. On the other hand, the retirement age assumption for them on a plan termination basis was 55, the earliest retirement age. According to PBGC, decreasing the assumed retirement age from 62 to 55 approximately doubled the liability for those participants. Other aspects of minimum funding rules may limit their ability to affect the funding of certain plans as their sponsors approach bankruptcy. According to its annual reports, for example, Bethlehem Steel contributed about $3.0 billion to its pension plan for plan years 1986 through 1996. According to the reports, the plan had a credit balance of over $800 million at the end of plan year 1996. Starting in 1997, Bethlehem Steel reduced its contributions to the plan and, according to annual reports, contributed only about $71.3 million for plan years 1997 through 2001. 
The plan’s 2001 actuarial report indicates that Bethlehem Steel’s minimum required contribution for the plan year ending December 31, 2001, would have been $270 million in the absence of a credit balance; however, the opening credit balance in the plan’s funding standard account as of January 1, 2001, was $711 million. Therefore, Bethlehem Steel was not required to make any contributions during the year. Other IRC funding rules may have prevented some sponsors from making contributions to plans that in 2002 were terminated at a loss to the single-employer program. For example, on January 1, 2000, the Polaroid pension plan’s assets were about $1.3 billion compared to accrued liabilities of about $1.1 billion—the plan was more than 100 percent funded. The plan’s actuarial report for that year indicates that the plan sponsor was precluded by the IRC funding rules from making a tax-deductible contribution to the plan. In July 2002, PBGC terminated the Polaroid pension plan, and the single-employer program assumed responsibility for $321.8 million in unfunded PBGC-guaranteed liabilities for the plan. The plan was about 67 percent funded, with assets of about $657 million to pay estimated PBGC-guaranteed liabilities of about $979 million. Another ERISA provision, concerning the payment of variable-rate premiums, is also designed to encourage employers to better fund their plans. As with minimum funding rules, the variable-rate premium did not provide sufficient incentives for the plan sponsors that we reviewed to make the contributions necessary to adequately fund their plans. None of the three underfunded plans that we reviewed, which became losses to the single-employer program in 2002 and 2003, paid a variable-rate premium in the 2001 plan year. 
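The credit-balance offset at work in the Bethlehem Steel example above can be reduced to a minimal sketch. Real funding standard accounts are more involved (balances accrue interest, for example), so this only illustrates how a balance from past excess contributions can satisfy a minimum required contribution; the figures are the Bethlehem Steel amounts cited above, in millions of dollars.

```python
def cash_contribution_due(minimum_required, credit_balance):
    """Sketch of a funding standard account offset: a credit balance can
    be applied against the minimum required contribution, reducing the
    cash a sponsor must actually pay in.
    Returns (cash due, remaining credit balance)."""
    applied = min(minimum_required, credit_balance)
    return minimum_required - applied, credit_balance - applied

# Bethlehem Steel's 2001 figures from the text: a $270 million minimum
# requirement against a $711 million opening credit balance.
cash_due, remaining = cash_contribution_due(270, 711)
# cash_due == 0; remaining == 441
```

The sketch makes the policy concern concrete: a sponsor whose plan is becoming underfunded can owe nothing in cash so long as a large enough balance remains in the account.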
Plans are exempt from the variable-rate premium if they are at the full-funding limit in the year preceding the premium payment year, in this case 2000, after applying any contributions and credit balances in the funding standard account. Each of these plans met this criterion. Two primary risks threaten the long-term financial viability of the single-employer program. The greater risk concerns the program’s liabilities: large losses, due to bankrupt firms with severely underfunded pension plans, could continue or accelerate. This could occur if returns on investment remain poor, interest rates stay low, and economic problems persist. More troubling for liabilities is the possibility that structural weaknesses in industries with large underfunded plans, including those greatly affected by increasing global competition, combined with the general shift toward defined-contribution pension plans, could jeopardize the long-term viability of the defined-benefit system. On the asset side, PBGC also faces the risk that it may not receive sufficient revenue from premium payments and investments to offset the losses experienced by the single-employer program in 2002 or that this program may experience in the future. This could happen if program participation falls or if PBGC earns a return on its assets below the rate it uses to value its liabilities. Plan terminations affect the single-employer program’s financial condition because PBGC takes responsibility for paying benefits to participants of underfunded terminated plans. Several factors would increase the likelihood that sponsoring firms will go bankrupt, and therefore will need to terminate their pension plans, and the likelihood that those plans will be underfunded at termination. Among these are poor investment returns, low interest rates, and continued weakness in the national economy or in specific sectors. 
Particularly troubling may be structural weaknesses in certain industries with large underfunded defined-benefit plans. Poor investment returns from a decline in the stock market can affect the funding of pension plans. To the extent that pension plans invest in stocks, the decline in the stock market will increase the chance that plans will be underfunded should they terminate. A Greenwich Associates survey of defined-benefit plan investments indicates that 59.4 percent of plan assets were invested in stocks in 2002. Clearly, the future direction of the stock market is very difficult to forecast. From the end of 1999 through the end of 2002, total cumulative returns in the stock market, as measured by the S&P 500, were negative 37.6 percent. In 2003, the S&P 500 has partially recovered those losses, with total returns (from a lower starting point) of 14.7 percent through the end of September. From January 1975, the beginning of the first year following the passage of ERISA, through September 2003, the average annual compounded nominal return on the S&P 500 equaled 13.5 percent. A decline in asset values can be particularly problematic for plans if interest rates remain low or fall, which raises plan liabilities, all else equal. The highest allowable discount rate for calculating current plan liabilities, based on the 30-year U.S. Treasury bond rate, has been no higher than 7.1 percent since April 1998, lower than any previous point during the 1990s. Falling interest rates raise the price of group annuities that a terminating plan must purchase to cover its promised benefits and increase the likelihood that a terminating plan will not have sufficient assets to make such a purchase. An increase in liabilities due to falling interest rates also means that companies may be required under the minimum funding rules to increase contributions to their plans. 
This can create financial strain and increase the chances of the firm going bankrupt, thus increasing the risk that PBGC will have to take over an underfunded plan. Economic weakness can also lead to greater underfunding of plans and to a greater risk that underfunded plans will terminate. For many firms, slow or declining economic growth causes revenues to decline, which makes contributions to pension plans more difficult. Economic sluggishness also raises the likelihood that firms sponsoring pension plans will go bankrupt. Three of the last five annual increases in bankruptcies coincided with recessions, and the record economic expansion of the 1990s is associated with a substantial decline in bankruptcies. Annual plan terminations resulting in losses to the single-employer program rose from 83 in 1989 to 175 in 1991, and, after declining to 65 in 2000, the number reached 93 in 2001. Weakness in certain industries, particularly the airline and automotive industries, may threaten the viability of the single-employer program. Because PBGC has already absorbed most of the pension plans of steel companies, it is the airline industry, with $26 billion of total pension underfunding, and the automotive sector, with over $60 billion in underfunding, that currently represent PBGC’s greatest future financial risks. In recent years, profit pressures within the U.S. airline industry have been amplified by severe price competition, recession, terrorism, the war in Iraq, and the outbreak of Severe Acute Respiratory Syndrome (SARS), contributing to recent bankruptcies and uncertainty about the industry’s future financial health. As one pension expert noted, a potentially exacerbating risk in weak industries is the cumulative effect of bankruptcy; if a critical mass of firms go bankrupt and terminate their underfunded pension plans, others, in order to remain competitive, may also declare bankruptcy to avoid the cost of funding their plans. 
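The compound-return figures cited in this section can be checked with two small helpers. This is ordinary return arithmetic rather than anything specific to PBGC; the example uses the cumulative S&P 500 decline of 37.6 percent over the 3 years 2000 through 2002 noted above.

```python
def cumulative_return(annual_returns):
    """Cumulative total return implied by a sequence of annual returns."""
    total = 1.0
    for r in annual_returns:
        total *= 1.0 + r
    return total - 1.0

def annualized(cumulative, years):
    """Compound annual growth rate implied by a cumulative return."""
    return (1.0 + cumulative) ** (1.0 / years) - 1.0

# A cumulative -37.6 percent over 3 years works out to roughly
# -14.5 percent per year, compounded:
cagr = annualized(-0.376, 3)
```

The compounding asymmetry these helpers expose — a 20 percent gain followed by a 20 percent loss still leaves a net loss — is one reason a downturn of this depth is so hard for heavily equity-invested plans to recover from.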
Because the financial condition of both firms and their pension plans can eventually affect PBGC’s financial condition, PBGC tries to determine how many firms are at risk of terminating their pension plans and the total amount of unfunded vested benefits. According to PBGC’s fiscal year 2002 estimates, the agency is at risk of assuming $35 billion in unfunded vested benefits from plans that are sponsored by financially weak companies and could terminate. Almost one-third of these unfunded benefits, about $11.4 billion, are in the airline industry. Additionally, PBGC estimates that it could become responsible for over $15 billion in shutdown benefits in PBGC-insured plans. PBGC uses a model called the Pension Insurance Modeling System (PIMS) to simulate the flow of claims to the single-employer program and to project its potential financial condition over a 10-year period. This model produces a very wide range of possible outcomes for PBGC’s future net financial position. To be viable in the long term, the single-employer program must receive sufficient income from premiums and investments to offset losses due to terminating underfunded plans. A number of factors could cause the program’s revenues to fall short of this goal or decline outright. For example, flat-rate premiums would decline if the number of participants covered by the program decreases, which may happen if plans leave the system and are not replaced. Additionally, the program’s financial condition would deteriorate to the extent investment returns fall below the assumed interest rate used to value liabilities. Annual PBGC income from premiums and investments averaged $1.3 billion from 1976 to 2002, in 2002 dollars, and $2 billion since 1988, when variable-rate premiums were introduced. Since 1988, investment income has on average equaled premium income, but has varied more than premium income, including 3 years in which investment income fell below zero. (See fig. 9.) 
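PIMS itself is a detailed stochastic model of sponsors, plans, and the economy; the toy Monte Carlo below only conveys the basic idea of simulating many possible claim paths to obtain a distribution of future net positions. Every parameter here (premium income, claim distribution, horizon) is a made-up placeholder, not a PIMS assumption.

```python
import random

def simulate_net_position(start_position, years=10, trials=1000,
                          premium=1.0, mean_claim=1.0, claim_sd=2.0,
                          seed=0):
    """Toy claims simulation, in $ billions. Each year adds premium
    income and subtracts a randomly drawn (non-negative) claim amount;
    returns the list of ending net positions across all trials."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    outcomes = []
    for _ in range(trials):
        position = start_position
        for _ in range(years):
            claim = max(0.0, rng.gauss(mean_claim, claim_sd))
            position += premium - claim
        outcomes.append(position)
    return outcomes

# Starting from the roughly $3.6 billion fiscal year 2002 deficit
# cited earlier in this section:
outcomes = simulate_net_position(-3.6)
spread = max(outcomes) - min(outcomes)  # the range of possible outcomes
```

Even this crude sketch produces a wide spread of ending positions, which is the qualitative point the text makes about PIMS output.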
In 2001, total premium and investment income was negative and in 2002 equaled approximately $1 billion. Premium revenue for PBGC would likely decline if the total number of plans and participants terminating their defined-benefit plans exceeded the new plans and participants joining the system. This decline in participation would mean a decline in PBGC’s flat-rate premiums. If more plans become underfunded, this could possibly raise the revenue PBGC receives from variable-rate premiums, but would also be likely to raise the overall risk of plans terminating with unfunded liabilities. Premium income, in 2002 dollars, has fallen every year since 1996, even though the Congress lifted the cap on variable-rate premiums in that year. The decline in the number of plans PBGC insures may cast doubt on its ability to increase premium income in the future. The number of PBGC-insured plans has decreased steadily from approximately 110,000 in 1987 to around 30,000 in 2002. While the number of total participants in PBGC-insured single-employer plans has grown approximately 25 percent since 1980, the percentage of participants who are active workers has declined from 78 percent in 1980 to 53 percent in 2000. Manufacturing, a sector with virtually no job growth in the last half-century, accounted for almost half of PBGC’s single-employer program participants in 2001, suggesting that the program needs to rely on other sectors for any growth in premium income. (See fig. 10.) In addition, a growing percentage of plans have recently become hybrid plans, such as cash-balance plans that incorporate characteristics of both defined-contribution and defined-benefit plans. Hybrid plans are more likely than traditional defined-benefit plans to offer participants the option of taking benefits as a lump-sum distribution. 
If the proliferation of hybrid plans increases the number of participants taking lump sums instead of retirement annuities, over time this would reduce the number of plan participants, thus potentially reducing PBGC’s flat-rate premium revenue. Unless something reverses these trends, PBGC may have a shrinking plan and participant base to support the program in the future and that base may be concentrated in certain, potentially more vulnerable industries. Even more problematic than the possibility of falling premium income may be that PBGC’s premium structure does not reflect many of the risks that affect the probability that a plan will terminate and impose a loss on PBGC. While PBGC charges plan sponsors a variable-rate premium based on the plan’s level of underfunding, premiums do not consider other relevant risk factors, such as the economic strength of the sponsor, plan asset investment strategies, the plan’s benefit structure, or the plan’s demographic profile. Because these affect the risk of PBGC having to take over an underfunded pension plan, it is possible that PBGC’s premiums will not adequately and equitably protect the agency against future losses. The recent terminations of Bethlehem Steel, Anchor Glass, and Polaroid, plans that paid no variable-rate premiums shortly before terminating with large underfunded balances, lend some evidence to this possibility. Sponsors also pay flat-rate premiums in addition to variable-rate premiums, but these reflect only the number of plan participants and not other risk factors that affect PBGC’s potential exposure to losses. Full-funding limitations may exacerbate the risk of underfunded terminations by preventing firms from contributing to their plans during strong economic times when asset values are high and firms are in the best financial position to make contributions. It may also be difficult for PBGC to diversify its pool of insured plans among strong and weak sponsors and plans. 
In addition to facing firm-specific risk that an individual underfunded plan may terminate, PBGC faces market risk that a poor economy may lead to widespread underfunded terminations during the same period, which potentially could cause very large losses for PBGC. Similarly, PBGC may face risk from insuring plans concentrated in vulnerable industries that may suffer bankruptcies over a short time period, as has happened recently in the steel and airline industries. One study estimates that the overall premiums collected by PBGC amount to about 50 percent of what a private insurer would charge because its premiums do not account for this market risk. The net financial position of the single-employer program also depends heavily on the long-term rate of return that PBGC achieves from the investment of the program’s assets. All else equal, PBGC’s net financial condition would improve if its total net return on invested assets exceeded the discount rate it used to value its liabilities. For example, between 1993 and 2000 the financial position of the single-employer program benefited from higher rates of return on its invested assets and its financial condition improved. However, if the rate of return on assets falls below the discount rate, PBGC’s finances would worsen, all else equal. As of September 30, 2002, PBGC had approximately 65 percent of its single-employer program investments in U.S. government securities and approximately 30 percent in equities. The high percentage of assets invested in Treasury securities, which typically earn low yields because they are considered to be relatively “risk-free” assets, may limit the total return on PBGC’s portfolio. Additionally, PBGC bases its discount rate on surveys of insurance company group annuity prices, and because PBGC invests differently than do insurance companies, we might expect some divergence between the discount rate and PBGC’s rate of return on assets. 
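The effect of divergence between the asset return and the discount rate can be made concrete with a stylized projection. This sketch ignores premiums, new claims, and benefit payouts entirely; the rates echo the fiscal year 2002 figures discussed in this section, but the $100 starting amounts and 5-year horizon are hypothetical.

```python
def net_position_after(assets, liabilities, asset_return, discount_rate, years):
    """Sketch: assets compound at the portfolio return while the present
    value of liabilities accretes at the discount rate. All else equal,
    the net position worsens whenever the return falls short of the rate
    used to value liabilities."""
    ending_assets = assets * (1 + asset_return) ** years
    ending_liabilities = liabilities * (1 + discount_rate) ** years
    return ending_assets - ending_liabilities

# Return below the discount rate (as in fiscal year 2002) vs. above it:
worse = net_position_after(100, 100, 0.021, 0.057, 5)
better = net_position_after(100, 100, 0.080, 0.057, 5)
```

Starting from exactly matched assets and liabilities, the first case ends in deficit and the second in surplus, which is the "all else equal" logic stated in the text.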
PBGC’s return on total invested funds was 2.1 percent for the year ending September 30, 2002, and 5.8 percent for the 5-year period ending on that date. For fiscal year 2002, PBGC used an annual discount rate of 5.70 percent to determine the present value of future benefit payments through 2027 and a rate of 4.75 percent for payments made in the remaining years. The magnitude and uncertainty of these long-term financial risks pose particular challenges for PBGC’s single-employer insurance program and potentially for the federal budget. In 1990, we began a special effort to review and report on the federal program areas we considered high risk because they were especially vulnerable to waste, fraud, abuse, and mismanagement. In the past, we considered PBGC to be on our high-risk list because of concerns about the program’s viability and about management deficiencies that hindered the agency’s ability to effectively assess and monitor its financial condition. The current challenges to PBGC’s single-employer insurance program involve immediate as well as long-term financial difficulties, which reflect structural weaknesses rather than operational or internal control deficiencies. Nevertheless, because of serious risks to the program’s viability, we have placed the PBGC single-employer insurance program on our high-risk list. Although some pension professionals have suggested a “wait and see” approach, betting that brighter economic conditions that would improve PBGC’s financial condition are imminent, agency officials and other pension professionals have suggested taking a more prudent, proactive approach, identifying a variety of options that could address the challenges facing PBGC’s single-employer program. In our view, several types of reforms might be considered to reduce the risks to the single-employer program’s long-term financial viability. 
These reforms could strengthen the funding rules applicable to poorly funded plans; modify program guarantees; restructure premiums; and improve the availability of information about plan investments, termination funding, and program guarantees. Several variations exist within these options, and each has advantages and disadvantages. In any event, any changes adopted to address the challenges facing PBGC should provide a means to hold sponsors accountable for adequately funding their plans, provide plan sponsors with incentives to increase plan funding, and improve the transparency of plans’ financial information.

Funding rules could be strengthened to increase minimum contributions to underfunded plans and to allow additional contributions to fully funded plans. This approach would improve plan funding over time, while limiting the losses PBGC would incur when a plan is terminated. However, even if funding rules were to be strengthened immediately, it could take years for the change to have a meaningful effect on PBGC’s financial condition. In addition, such a change would require some sponsors to allocate additional resources to their pension plans, which may cause the sponsor of an underfunded plan to provide less generous wages or benefits than would otherwise be provided. The IRC could be amended to:

Base additional funding requirements and maximum tax-deductible contributions on plan termination liabilities, rather than current liabilities. Since plan termination liabilities typically exceed current liabilities, such a change would likely improve plan funding and therefore reduce potential claims against PBGC. One problem with this approach is the difficulty plan sponsors would have determining the appropriate interest rate to use in valuing termination liabilities. As we reported, selecting an appropriate interest rate for termination liability calculations is difficult because little information exists on which to base the selection. 
Raise the threshold for the additional funding requirement. The IRC requires sponsors to make additional contributions under two circumstances: (1) if the value of plan assets is less than 80 percent of its current liability or (2) if the value of plan assets is less than 90 percent of its current liability, depending on plan funding levels for the previous 3 years. Raising the threshold would require more sponsors of underfunded plans to make the additional contributions.

Limit the use of credit balances. For sponsors who make contributions in any given year that exceed the minimum required contribution, the excess plus interest is credited against future required contributions. Limiting the use of credit balances to offset contribution requirements might prevent sponsors of significantly underfunded plans from avoiding contributions. Such limitations might also be applied based on the plan sponsor’s financial condition. For example, sponsors with poor cash flow or low credit ratings could be restricted from using their credit balances to reduce their contributions.

Limit lump-sum distributions. Defined benefit pension plans may offer participants the option of receiving their benefit in a lump-sum payment. Allowing participants to take lump-sum distributions from severely underfunded plans, especially those sponsored by financially weak companies, allows the first participants who request a distribution to drain plan assets, which might result in the remaining participants receiving reduced payments from PBGC if the plan terminates. However, the payment of lump sums by underfunded plans may not directly increase losses to the single-employer program because lump sums reduce plan liabilities as well as plan assets.

Raise the level of tax-deductible contributions. The IRC and ERISA restrict tax-deductible contributions to prevent plan sponsors from contributing more to their plan than is necessary to cover accrued future benefits. 
Raising these limitations might result in pension plans being better funded, decreasing the likelihood that they will be underfunded should they terminate.

Modifying certain guaranteed benefits could decrease losses incurred by PBGC from underfunded plans. This approach could preserve plan assets by preventing additional losses that PBGC would incur when a plan is terminated. However, participants would lose benefits provided by some plan sponsors. ERISA could be amended to:

Phase in the guarantee of shutdown benefits. PBGC is concerned about its exposure to the level of shutdown benefits that it guarantees. Shutdown benefits provide additional benefits, such as significant early retirement benefit subsidies, to participants affected by a plant closing or a permanent layoff. Such benefits are primarily found in the pension plans of large unionized companies in the auto, steel, and tire industries. In general, shutdown benefits cannot be adequately funded before a shutdown occurs. Phasing in guarantees from the date of the applicable shutdown could decrease the losses incurred by PBGC from underfunded plans. However, modifying these benefits would reduce the early retirement benefits for participants who are in plans with such provisions and are affected by a plant closing or a permanent layoff. Dislocated workers, particularly in manufacturing, may suffer additional losses from lengthy periods of unemployment or from finding reemployment only at much lower wages.

Expand restrictions on unfunded benefit increases. Currently, plan sponsors must meet certain conditions before increasing the benefits of plans that are less than 60 percent funded. Increasing this threshold, or restricting benefit increases when plans reach the threshold, could decrease the losses incurred by PBGC from underfunded plans. 
Plan sponsors have said that the disadvantage of such changes is that they would limit an employer’s flexibility with regard to setting compensation, making it more difficult to respond to labor market developments. For example, a plan sponsor might prefer to offer participants increased pension payments or shutdown benefits instead of offering increased wages because pension benefits can be deferred—providing time for the plan sponsor to improve its financial condition—while wage increases have an immediate effect on the plan sponsor’s financial condition.

PBGC’s premium rates could be increased or restructured to improve PBGC’s financial condition. Changing premiums could increase PBGC’s revenue or provide an incentive for plan sponsors to better fund their plans. However, premium changes that are not based on the degree of risk posed by different plans may force financially healthy companies out of the defined-benefit system and discourage other plan sponsors from entering the system. ERISA could be amended to:

Increase or restructure the variable-rate premium. The current variable-rate premium of $9 per $1,000 of unfunded liability could be increased. The rate could also be adjusted so that plans with less adequate funding pay a higher rate. Premium rates could also be restructured based on the degree of risk posed by different plans, which could be assessed by considering the financial strength and prospects of the plan’s sponsor, the risk of the plan’s investment portfolio, participant demographics, and the plan’s benefit structure—including whether the plan has lump-sum, shutdown-benefit, or floor-offset provisions. One advantage of a rate increase or restructuring is that it might improve accountability by providing for a more direct relationship between the amount of premium paid and the risk of underfunding. 
A disadvantage is that it could further burden already struggling plan sponsors at a time when they can least afford it, or it could reduce plan assets, increasing the likelihood that underfunded plans will terminate. A program with premiums that are more risk-based could also be more challenging for PBGC to administer.

Increase the fixed-rate premium. The current fixed rate of $19 per participant annually could be increased. Since the inception of PBGC, this rate has been raised four times, most recently in 1991, when it was raised from $16 to $19. Such increases generally raise premium income for PBGC, but the current fixed-rate premium has not reflected changes in inflation since 1991. By indexing the rate to the consumer price index, changes to the premium would keep pace with inflation. However, any increase in the fixed-rate premium would affect all plans regardless of the adequacy of their funding.

Improving the availability of information to plan participants and others about plan investments, termination funding status, and PBGC guarantees may give plan sponsors additional incentives to better fund their plans and make participants better able to plan for their retirement. ERISA could be amended to:

Disclose information on plan investments. While some asset allocation information is reported by plans in Form 5500 filings with the IRS, some plan investments may be made through common and collective trusts, master trusts, and registered investment companies, which make it difficult or impossible for participants and others to determine the asset classes–such as equity or fixed-income investments–for many plan investments. Improving the availability of plan asset allocation information may give plan sponsors an incentive to increase funding of underfunded plans or limit risky investments. Information provided to participants could also disclose how much of plan assets are invested in the sponsor’s own securities. 
This would be of concern because, should the sponsor become bankrupt, the value of the securities could be expected to drop significantly, reducing plan funding. Although this information is currently provided in the plan’s Form 5500, it is not readily accessible to participants. Additionally, if the defined-benefit plan has a floor-offset arrangement and its benefits are contingent on the investment performance of a defined-contribution plan, then information provided to participants could also disclose how much of that defined-contribution plan’s assets are invested in the sponsor’s own securities.

Disclose plan termination funding status. Under current law, sponsors are required to report a plan’s current liability for funding purposes, which often can be lower than termination liability. In addition, only participants in plans below a certain funding threshold receive annual notices of the funding status of their plans. As a result, many plan participants, including participants of the Bethlehem Steel pension plan, did not receive such notifications in the years immediately preceding the termination of their plans. Expanding the circumstances under which sponsors must notify participants of plan underfunding might give sponsors an additional incentive to increase plan funding and would enable more participants to better plan for their retirement.

Disclose benefit guarantees to additional participants. As with the disclosure of plan funding status, only participants of plans below the funding threshold receive notices on the level of program guarantees should their plan terminate. Termination of a severely underfunded plan can significantly reduce the benefits participants receive. 
For example, 59-year-old pilots were expecting annual benefits of $110,000 per year on average when the US Airways plan was terminated in 2003, while the maximum PBGC-guaranteed benefit at age 60 is $28,600 per year. Expanding the circumstances under which plan sponsors must notify participants of PBGC guarantees may enable more participants to better plan for their retirement.

The current financial challenges facing PBGC and the array of policy options to address those challenges are more appropriately viewed within the context of the agency’s overall mission. In 1974, ERISA placed three important charges on PBGC: first, protect the pension benefits so essential to the retirement security of hardworking Americans; second, minimize the pension insurance premiums and other costs of carrying out the agency’s obligations; and finally, foster the health of the private defined-benefit pension plan system. While addressing one or even two of these goals would be a challenge, it is a far more formidable endeavor to fulfill all three. In any event, any changes adopted to address the challenges facing PBGC should provide plan sponsors with incentives to increase plan funding, improve the transparency of plans’ financial information, and provide a means to hold sponsors accountable for funding their plans adequately. Ultimately, however, for any insurance program, including the single-employer pension insurance program, to be self-financing, there must be a balance between premiums and the program’s exposure to losses. A variety of options are available to the Congress and PBGC to address the short-term vulnerabilities of the single-employer insurance program. Congress will have to weigh carefully the strengths and weaknesses of each option as it crafts the appropriate policy response. However, to understand the program’s structural problems, it helps to understand how much the world has changed since the enactment of ERISA. 
In 1974, it might have been difficult for some to envision the long-term decline that our nation’s private defined-benefit pension system has since experienced. Although there has been some absolute growth in the system since 1980, active workers have comprised a declining percentage of program participants, and defined-benefit plan coverage has declined as a percentage of the national private labor force. The causes of this long-term decline are many and complex and have turned out to be more systemic, more structural in nature, and far more powerful than the resources and bully pulpit that PBGC can bring to bear. This trend has had important implications for the nature and the magnitude of the risk that PBGC must insure. Since 1987, as employers both large and small have exited the system, newer firms have generally chosen other vehicles to help their employees provide for their retirement security. This has left PBGC with a risk pool of employers that is concentrated in sectors of the economy, such as air transportation and automobiles, that have become increasingly vulnerable. As of 2002, almost half of all defined-benefit plan participants were covered by plans offered by firms in manufacturing industries. The secular decline and competitive turmoil already experienced in industries like steel and air transportation could well extend to the other remaining strongholds of defined-benefit plans in the future, weakening the system even further. Thus, the long-term financial health of PBGC and its ability to protect workers’ pensions is inextricably bound to this underlying change in the nature of the risk that it insures, and implicitly to the prospective health of the defined-benefit system. Options that serve to revitalize the defined-benefit system could stabilize PBGC’s financial situation, although such options may be effective only over the long term. 
The more immediate challenge, however, is the fundamental consideration of the manner in which the federal government protects the defined-benefit pensions of workers in this increasingly risky environment. We look forward to working with the Congress on this crucial subject. Mr. Chairman, members of the committee, that concludes my statement. I’d be happy to answer any questions you may have. As part of the Employee Retirement Income Security Act of 1974 (ERISA), the Congress established the Pension Benefit Guaranty Corporation (PBGC) to administer the federal insurance program. Since 1974, the Congress has amended ERISA to improve the financial condition of the insurance program and the funding of single-employer plans (see table 1).
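For illustration, the two premium components discussed in this testimony — the $19-per-participant flat rate and the $9 per $1,000 of unfunded liability variable rate — combine additively. A minimal sketch follows; the participant count and underfunding amount are hypothetical, not taken from any plan discussed here.

```python
# Minimal sketch of the single-employer program's premium structure as
# described in this testimony; not PBGC's official calculation.

FLAT_RATE_PER_PARTICIPANT = 19   # dollars per participant per year
VARIABLE_RATE_PER_1000 = 9       # dollars per $1,000 of unfunded liability

def annual_premium(participants, unfunded_liability):
    """Total yearly premium = flat-rate portion + variable-rate portion."""
    flat = participants * FLAT_RATE_PER_PARTICIPANT
    variable = unfunded_liability / 1_000 * VARIABLE_RATE_PER_1000
    return flat + variable

# Hypothetical plan: 10,000 participants, $50 million underfunded.
print(annual_premium(10_000, 50_000_000))  # 640000.0  ($190,000 + $450,000)

# A plan that shows no underfunding under the measure used for premium
# purposes pays only the flat-rate portion -- one reason a plan can pay no
# variable-rate premium shortly before terminating with a large shortfall.
print(annual_premium(10_000, 0))           # 190000.0
```

Note that neither component reflects sponsor financial strength, investment risk, or demographics — the risk factors the testimony identifies as missing from the premium structure.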
More than 34 million workers and retirees in 30,000 single-employer defined-benefit pension plans rely on a federal insurance program managed by the Pension Benefit Guaranty Corporation (PBGC) to protect their pension benefits, and the program's long-term financial viability is in doubt. Over the last decade, the program swung from a $3.6 billion accumulated deficit (liabilities exceeding assets) to a $10.1 billion accumulated surplus and back to a $3.6 billion accumulated deficit, in 2002 dollars. Furthermore, despite a record $9 billion in estimated losses to the program in 2002, additional severe losses may be on the horizon. PBGC estimates that financially weak companies sponsor plans with $35 billion in unfunded benefits, which ultimately might become losses to the program. This testimony provides GAO's observations on the factors that contributed to recent changes in the single-employer pension insurance program's financial condition, risks to the program's long-term financial viability, and changes to the program that might be considered to reduce those risks. The single-employer pension insurance program returned to an accumulated deficit in 2002 largely due to the termination, or expected termination, of several severely underfunded pension plans. Factors that contributed to the severity of plans' underfunded condition included a sharp stock market decline, which reduced plan assets, and an interest rate decline, which increased plan termination costs. For example, PBGC estimates losses to the program from terminating the Bethlehem Steel pension plan, which was nearly fully funded in 1999 based on reports to the IRS, at $3.7 billion when it was terminated in 2002. The plan's assets had decreased by over $2.5 billion, while its liabilities had increased by about $1.4 billion since 1999. The single-employer program faces two primary risks to its long-term financial viability. 
First, the losses experienced in 2002 could continue or accelerate if, for example, structural problems in particular industries result in additional bankruptcies. Second, revenue from premiums and investments might be inadequate to offset program losses experienced to date or those that occur in the future. Revenue from premiums might fall, for example, if the number of program participants decreases. Because of these risks, we recently placed the single-employer insurance program on our high-risk list of programs with significant vulnerabilities to the federal government. While there is not an immediate crisis, there is a serious problem threatening the retirement security of millions of American workers and retirees. Several reforms might reduce the risks to the program's long-term financial viability. Such changes include: strengthening funding rules applicable to poorly funded plans, modifying program guarantees, restructuring premiums, and improving the availability of information about plan investments, termination funding, and program guarantees. Any changes adopted to address the challenges facing PBGC should provide a means to hold plan sponsors accountable for adequately funding their plans, provide plan sponsors with incentives to increase plan funding, and improve the transparency of plan information.
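The Bethlehem Steel figures in the summary above reduce to simple funded-status arithmetic. In the sketch below, the 1999 asset level is hypothetical (only the changes are given in the testimony); as the code notes, the resulting underfunding depends only on those changes.

```python
# Minimal sketch of funded-status arithmetic using the changes cited above.
# Starting from a (hypothetical) fully funded position, a $2.5 billion drop
# in assets plus a $1.4 billion rise in liabilities yields roughly the
# underfunding PBGC estimated at termination.

assets_1999 = 6.5e9            # hypothetical; plan was "nearly fully funded"
liabilities_1999 = assets_1999

assets_2002 = assets_1999 - 2.5e9            # assets fell by over $2.5 billion
liabilities_2002 = liabilities_1999 + 1.4e9  # liabilities rose about $1.4 billion

# The chosen starting level cancels out: underfunding = 2.5e9 + 1.4e9.
underfunding = liabilities_2002 - assets_2002
print(f"${underfunding / 1e9:.1f} billion")  # $3.9 billion -- close to PBGC's
# $3.7 billion estimated loss, the gap reflecting "nearly" fully funded and
# the rounded change figures
```

The same mechanics — falling asset values and rising termination liabilities — drove the severity of the other 2002 terminations the summary describes.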
Financial literacy can be described as the ability to use knowledge and skills to manage money effectively. It includes the ability to understand financial choices, plan for the future, spend wisely, and manage the challenges that come with life events, such as a job loss, saving for retirement, or paying for a child’s education. It can also encompass financial education—the process by which people improve their understanding of financial products, services, and concepts. Financial literacy has received increased attention in recent years because poor financial management and decision making can result in a lower standard of living and prevent families from reaching important long-term goals, such as buying a home. Financial literacy has broader public policy implications as well. For example, the recent financial crisis can be attributed, at least in part, to unwise decisions by consumers about the use of credit. Moreover, educating the public about the importance of saving may be critical to boosting our national saving rate, an important element to improving America’s economic growth. The population of adults with limited English proficiency in the United States is diverse with respect to immigration status, country of origin, educational background, literacy in native language, age, and family status. Generally, adults with limited English proficiency have immigrated to the United States and include legal permanent residents, naturalized citizens, refugees, and undocumented individuals, but some of these adults are native born. According to the Census Bureau’s 2006-2008 American Community Survey, about 12.4 million adults in the United States—or 5.5 percent of the total U.S. adult population—reported speaking English not well or not at all. 
As shown in table 1, our analysis of the Census data shows that Spanish was the native language of about 74 percent of those adults who did not speak English well or at all, with Chinese, Vietnamese, Korean, and Russian representing the next most common native languages. The number of American residents who reported speaking English not well or not at all grew by about 29 percent from the 2000 Census to the 2006-2008 American Community Survey data, as compared to those who reported speaking English very well or well, which grew by about 8 percent during the same timeframe. As shown in figure 1, populations with limited English proficiency tend to be more concentrated in certain parts of the country. More than 13 percent of California’s population was limited English proficient in 2008, as were more than 8 percent of the populations of Texas, Arizona, and New York. Persons in the United States with limited English proficiency appear to have lower incomes, on average, than fluent English speakers. While limited data exist specifically on the relationship between limited English proficiency and economic status, an analysis of 2007 American Community Survey data by the Migration Policy Institute found that 20 percent of those who spoke Spanish at home lived in poverty, as did 11.8 percent of those who spoke Asian or Pacific Island languages, and 21.1 percent of those who spoke other languages—as compared with a poverty rate of 11.2 percent among persons who spoke only English. A study by the Federal Reserve Bank of Chicago and the Brookings Institution, using 2004 data from the Congressional Budget Office, reported that the median income of a family headed by an immigrant (irrespective of English language proficiency) was $42,980—and $34,798 for a family headed by an immigrant from Latin America—compared with $54,686 for families headed by someone born in the United States. There are also indications that English proficiency correlates with educational attainment. 
For example, the Migration Policy Institute analysis found that 41 percent of adults in Spanish-speaking households in the United States did not finish high school, as compared with 12 percent of adults in English-only speaking households. Little prior research has been conducted specifically on the relationship between financial literacy and lack of proficiency in English. A 2005 research review by Lutheran Immigration and Refugee Service revealed almost no studies that examined how the immigrant experience influences financial literacy. Similarly, a literature search that we conducted found a significant amount of research on financial literacy in general and with regard to certain populations, but almost nothing that examined the role that language itself plays in financial literacy and financial education. Further, experts on financial literacy that we consulted in the nonprofit and federal sectors told us they were aware of little or no existing work specifically on the barriers to financial literacy faced by those with limited English proficiency. Some data do exist on financial literacy among Hispanic populations; however, the data do not generally distinguish between Hispanics who are and are not proficient in English. (About 70 percent of Hispanics in the United States self-report that they only speak English or they speak it well or very well, according to 2006-2008 American Community Survey data.) Among the studies that did not directly address English language ability, a 2009 survey by the Financial Industry Regulatory Authority found that Hispanic respondents were less likely than Asian Americans and non-Hispanic Caucasians to answer basic financial literacy questions correctly. 
Further, a 2003 survey on retirement issues by the Employee Benefit Research Institute found that 43 percent of Hispanic workers described their personal knowledge as “knowing nothing” about investing or saving for retirement, as compared to 12 percent for all workers in the United States. The institute also found that those with the least amount of knowledge were much more likely to have poor English language skills. Despite a lack of systematic research, a variety of stakeholders agree that a lack of proficiency in English can create significant barriers to financial literacy and to conducting everyday financial affairs, particularly given the complexity of financial products and the language often used to describe them. Staff we spoke with at financial institutions, federal agencies, and community and advocacy organizations that work with non-English speaking populations consistently told us that, in their experience, a lack of proficiency in English can be a significant barrier to financial literacy. Some explained that because language is the medium most used to access information and ideas, individuals lacking English language skills are limited in their ability to communicate with English-speaking financial service providers and to perform certain tasks necessary to initiate financial transactions and access financial tools and educational materials. For example: Completing key documents. Service providers and consumers with limited English proficiency told us that most financial documents are available only in English, which limits the ability of individuals with limited English proficiency to complete applications, understand and sign contracts, and conduct other everyday financial affairs without assistance. Several representatives from financial institutions told us that they are reluctant to provide translations of documents, such as disclosures and contracts, because of liability concerns. Managing bank accounts. 
Several bankers and others with whom we spoke noted that individuals who cannot write in English find it difficult to write checks, which requires spelling out a dollar amount. For this reason, they said, debit card use has become popular among some individuals with limited English proficiency. The financial literacy study by Lutheran Immigration and Refugee Service noted that some refugees with limited literacy skills have difficulty using banks because they are not able to track deposits and withdrawals from their accounts.

Resolving problems. Some consumers and service providers we spoke with said that limited English proficiency serves as a particular barrier when it comes to asking questions, such as inquiring about additional fees on credit card statements, or resolving problems, such as correcting erroneous billing statements. One consumer with limited English proficiency told us that although he speaks some English, he has difficulty understanding and negotiating the automated telephone menu systems that one must often use to get assistance.

Accessing financial education. Although there is a multitude of print material, Web sites, broadcast media, and classroom curricula provided by government, nonprofit, and private sources aimed at improving financial education, these resources are not always available in languages other than English. Financial education initiatives that are provided in languages other than English or that are aimed at particular immigrant populations do exist (see appendix II), but are more limited, especially for speakers of languages other than Spanish. Information and documents related to financial products tend to be very complex and can be hard to understand, even for native English speakers. 
The Financial Literacy and Education Commission, which comprises 20 federal agencies, has noted that personal financial management is an extremely complex matter that requires significant resources and commitment for consumers to understand and evaluate the multitude of financial products available in the marketplace. Moreover, the language used in financial documents can be extremely confusing. For instance, in 2006 we reported that credit card disclosures were often written well above the eighth-grade level at which about half of U.S. adults read. In a separate report, we similarly found that the disclosures made by various lenders to inform consumers of the risks of alternative mortgage products, such as interest-only loans, used language too complex for many adults to understand. A study by the Federal Trade Commission in 2007 on consumer mortgage disclosures reported that home loan borrowers were frequently confused by the disclosures about their mortgages and experienced significant misunderstandings about the terms of their loans. Having limited proficiency in English clearly exacerbates these challenges. In 2008, the National Council of La Raza sponsored four focus groups on credit issues and found that, for some Hispanics, language barriers compounded the difficulties that all participants faced in understanding the jargon and fine print of applications, contracts, and credit reports. The report by Lutheran Immigration and Refugee Service stated that advanced literacy skills are needed to understand the terms and conditions tied to most financial contracts and that it can take up to 5 years of regular English communication and practice for an immigrant who is not a native English speaker to achieve that level of advanced literacy. These findings were corroborated through focus groups we conducted with a wide range of individuals who provide financial and social services to populations who lack English proficiency. 
These providers frequently noted that the complexity and specialized language of financial services can make conducting financial affairs particularly challenging for individuals with limited English proficiency. In some cases, written financial materials are provided in languages other than English, but the translation may not be fully comprehensible if it is not written using colloquial or culturally appropriate language. A 2004 report by the National Council of La Raza noted that financial education materials are often translated from English to their literal equivalent in Spanish, which may be unintelligible or difficult for the reader to understand. The report recommended the use of translation that attempts to convey images or messages without regard to literal phrasing and that would account for cultural differences and capture and clarify the meaning of terms. Financial service providers we spoke with also noted that many specific terms used in the U.S. financial system—such as “subprime” and “401K”—do not always have equivalent terms in other languages, which can make translation particularly difficult. Some financial education materials for those with limited English proficiency provide English and translated versions side-by-side to help readers improve their financial vocabulary and recognize key terms. Interpretation—that is, oral translation—can also be problematic. The service providers we spoke with said that individuals with limited English proficiency frequently rely on friends and family members to serve as interpreters when dealing with financial affairs. However, interpreters may not be reliable because they may not fully understand or be able to explain the material. In particular, advocates for immigrant communities told us that adults often use as interpreters their minor children, who may not have the ability to accurately convey complex information.
The Lutheran Immigration and Refugee Service report noted that many immigrants rely on relatives already residing in the United States to introduce them to the American financial system even though their family members may not have complete or accurate information themselves. In focus groups conducted for a report by Freddie Mac on Asian homebuyers, Chinese, Korean, and Vietnamese immigrants said that one of the key reasons they preferred to use Asian real estate agents was that they could conduct business in their native language, even when they were proficient in English. Some financial institutions have staff who can interpret or provide information in languages other than English, but it is unclear how widely this occurs. A report by the public interest group Appleseed on expanding and improving services for immigrants noted that the absence of culturally competent bilingual staff and services is a barrier to providing financial services to the low- and moderate-income immigrant market. In the same way, the financial education report by the National Council of La Raza stated that U.S. banks do not always employ bicultural, bilingual staff who can meet the diverse needs of Hispanics, especially immigrants. One provider told us that providing bilingual customer service can be challenging because even bilingual employees may not be able to accurately explain all the financial products offered by the institution. The representative of one financial services firm told us it forbids its staff from translating information or serving as interpreters for fear of providing incorrect or incomplete information. Similarly, an article in the trade journal Employee Benefit News cautioned that asking bilingual employees to present benefits information can be risky because the employee may lack financial expertise and the knowledge to explain industry-specific terms.
Federal agency officials as well as financial literacy experts and staff from service providers such as nonprofit organizations, credit unions, and banks that work with immigrant communities informed us that factors other than language often serve as barriers to financial literacy for people with limited English proficiency. These factors can include a lack of familiarity with the U.S. financial system, cultural differences, general mistrust of financial institutions, and income and education levels.

Lack of familiarity with the U.S. financial system. Some immigrants to the United States—some of whom are not proficient in English—lack familiarity with the U.S. financial system and its products, which may differ greatly from those in their native countries. These individuals may not have had exposure to mainstream financial institutions, such as banks, or may not have had experience with credit cards or retirement programs. A 2006 study sponsored by the Inter-American Development Bank noted that many new Hispanic immigrants have never had a bank account and that this is one of the obstacles that stand in the way of greater financial integration of recent Hispanic immigrants. Similarly, in focus groups conducted by Freddie Mac for its report on Asian homebuyers, new Asian immigrants cited unfamiliarity with the U.S. financial system as one challenge that they faced. Officials at the Internal Revenue Service told us that individuals with limited English proficiency often face additional challenges understanding the U.S. tax system, in part because the tax system in their home country was very different. Further, in the report by Lutheran Immigration and Refugee Service, service providers noted that new immigrants with limited banking experience were generally unclear about what happens to money they deposit and how they can access these funds; many are also new to the very concept of a credit system.
Additionally, one service provider told us that many new immigrants do not have their parents’ or previous generations’ financial experiences and lessons in the United States to learn from.

The role of culture. Cultural differences can play a role in financial literacy and the conduct of financial affairs because different populations have dissimilar norms, attitudes, and experiences related to managing money. For example, in some cultures the practice of borrowing money and carrying debt is viewed negatively, which may deter immigrants from such cultures from taking loans to purchase homes or cars and build credit histories. In focus groups of Asian homebuyers conducted by Freddie Mac, most participants expressed an aversion to debt, and some participants said they were accustomed to spending cash rather than using credit cards because they do not like to be in debt. Religious traditions can also influence the use of credit. The Lutheran Immigration and Refugee Service report notes that Muslims who adhere to religious prohibitions against receiving and paying interest face challenges participating in such mainstream financial products as home mortgages and retirement plans.

Mistrust of financial institutions. Some immigrants’ attitudes toward financial institutions have been shaped by their observations and experiences in their home countries. One academic paper on immigrants’ access to financial services noted that some U.S. immigrant households do not have bank accounts because of mistrust of banks, particularly if financial institutions in their home countries were marked by instability, lack of transparency, or fraud. A study sponsored by the Inter-American Development Bank similarly noted that negative attitudes towards depository financial institutions or a desire to keep financial information private has been an obstacle to using banks among some Hispanic immigrants.

Income and socioeconomic status.
Some studies have reported a correlation between income and financial literacy. As noted earlier, individuals with limited English proficiency have lower incomes, on average, than the U.S. population as a whole. In a 2008 survey of young American adults for the Jump$tart Coalition for Personal Financial Literacy, respondents whose family income was less than $20,000 per year received an average score of about 43 percent on a test of personal finance basics, as compared to a score of about 52 percent for students whose parents’ income was more than $80,000. A few financial service providers to immigrant communities we spoke with noted that low-income individuals may not have access to tools, such as educational courses and Internet sites, to improve their money management skills and overall financial literacy. The financial education report by the National Council of La Raza stated that the many Hispanic low-wage earners with work schedule restrictions or multiple jobs were limited in the ways in which they could participate in financial education programs.

Education. As noted earlier, there is evidence that people in the United States with limited English proficiency are more likely to have low educational attainment. Moreover, overall levels of education can affect financial literacy. For example, the Jump$tart Coalition survey found a correlation between test scores on the basics of financial literacy and the educational attainment of test takers and their parents. Similarly, researchers with the Board of Governors of the Federal Reserve System who reviewed consumer survey data from the University of Michigan found a statistically significant correlation between respondents’ levels of formal education and their ability to correctly answer a series of true-false questions concerning savings, credit, and other general financial management matters.
Staff at organizations that serve or advocate for immigrants told us that one factor in the ability to conduct financial affairs effectively is basic literacy—that is, the ability to read or write even in one’s native language. People with limited English proficiency who are not literate in any language face clear barriers to learning about and understanding financial issues, which can greatly impede their ability to conduct their everyday financial affairs. Some service providers and advocates told us that because factors other than language affect the financial literacy of people with limited English proficiency, translations of financial products and financial education materials may not be sufficient to address obstacles to financial literacy. While overcoming language barriers is important, they said, efforts to improve the financial literacy and well-being of people with limited English proficiency must also address underlying cultural and socioeconomic issues. Evidence suggests that people with limited English proficiency are more likely than the U.S. population as a whole not to have accounts at banks and other mainstream financial institutions. This condition is commonly referred to as being “unbanked” or “underbanked.” A 2009 national survey by the Federal Deposit Insurance Corporation (FDIC) found that 35.6 percent of households where only Spanish was spoken at home were unbanked, compared with 7.1 percent of households in which Spanish was not the only language spoken at home. Similarly, another study on the use of financial services by Hispanic immigrants found that they were significantly more likely to be unbanked than nonimmigrants, although it did not report specifically on the role of language. The FDIC survey found that among households who had never had a bank account, 9.1 and 6.9 percent cited that it was because “banks do not feel comfortable or welcoming” and “there are language barriers at banks,” respectively. 
Further, as noted earlier, immigrants may come from countries with corrupt or insecure financial systems, which can diminish their trust in mainstream financial institutions in the United States. In addition, persons with limited English proficiency who are undocumented may be further deterred from opening a bank account because of fear that the institution will share personal information with immigration authorities, according to the Lutheran Immigration and Refugee Service study and a few service providers we spoke with. According to FDIC, unbanked or underbanked consumers may pay excessive fees for basic financial services, be more vulnerable to loss or theft, and struggle to build credit histories and achieve financial security. FDIC has also reported that households that are unbanked are more likely to use alternative financial services, and about two-thirds of these households used pawn shops, payday loans, rent-to-own agreements, nonbank money orders, or check-cashing services in the past year. There are a number of reasons why populations with limited English proficiency may be more likely to use alternative financial services. First, alternative financial service providers, such as payday lenders and check-cashing outlets, generally cluster in and around neighborhoods with lower-income, minority, and Hispanic families, according to a 2004 analysis by the Urban Institute. The Lutheran Immigration and Refugee Service study said that in each of the five cities researchers visited, alternative financial services appeared to be widely available in neighborhoods where new immigrants lived, noting that immigrants were often aggressively targeted for these services through direct mail, telemarketing, and door-to-door sales. 
Some immigrants are attracted to alternative financial service providers because these institutions often cater specifically to their communities by, among other things, requiring little or no documentation, hiring staff who speak the language of their community, and offering convenient hours. However, concerns exist about the widespread use of such alternative financial service providers since the loan fees they charge are generally much higher than those charged by traditional financial institutions, and other terms and conditions of such loans are often unfavorable to the borrower. Further, evidence suggests that some populations with limited English language skills may be more susceptible to fraudulent and predatory practices. The Lutheran Immigration and Refugee Service report noted that some immigrants may trust financial service providers who speak their native language even if they do not understand the legalities of agreements they make. Service providers that work with limited English proficient communities told us that in some cases unscrupulous individuals use their ability to converse fluently in someone’s native language to build trust and then take advantage of the person. Some service providers described to us scams they have observed in which individuals with limited English skills are told the terms of an agreement orally in their native language and then asked to sign a written contract in English with terms different than those described. Credit counselors we spoke with said that having limited proficiency in English can make it difficult to understand the distinctions between various financial products. The report by the Appleseed organization notes that language and cultural barriers may also make it harder for immigrants, including those with limited English skills, to register a complaint about an abusive practice or product. 
The Federal Trade Commission has similarly noted that Hispanic immigrants, especially those with limited English proficiency, may be more susceptible to consumer fraud such as credit card fraud and other abusive practices. According to the agency, it pursued 37 cases involving Spanish-language frauds targeted at Hispanic consumers as part of its Hispanic Law Enforcement and Outreach Initiative between April 2004 and September 2006. The Federal Trade Commission has also translated dozens of its consumer education publications into Spanish, in part to reduce the susceptibility of Spanish-speaking consumers to fraud and scams. Several service providers we spoke with said that financial education can play an important role in helping consumers with limited English proficiency avoid abusive and predatory practices.

We provided a draft of this report to the Department of the Treasury and the Federal Trade Commission for their review, and neither agency had any comments. We are sending copies of this report to the Secretary of the Treasury, the Chairman of the Federal Trade Commission, and interested congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our reporting objective was to examine the extent, if any, to which individuals with limited English proficiency are impeded in their financial literacy and conduct of financial affairs. To address this objective, we conducted a review of relevant literature related to financial literacy among immigrants and people with limited English proficiency.
To identify existing studies, reports, articles, and surveys, we conducted searches of several databases, including Business & Industry, Banking Info Source, and EconLit, using key words to link financial literacy or financial education to language, English proficiency, and other concepts. We also asked for recommendations for studies, reports, and articles from academic experts and from representatives of organizations that address issues related to financial literacy or limited English proficient communities. We also conducted focused Internet searches, and we reviewed the bibliographies of reports we had already obtained to identify additional material. Each of the documentary sources cited in our report was reviewed for methodological strength and reliability and determined to be sufficiently reliable for our purposes. We performed our searches from August 2009 to April 2010. To describe the U.S. population of individuals with limited English proficiency, we obtained and analyzed data from the United States Census Bureau’s 2006-2008 American Community Survey and the 2000 U.S. Census. The Census Bureau does not define the term “limited English proficient.” As such, we developed our measures of the limited English proficient population based on questions in the American Community Survey that asked “Does this person speak a language other than English at home?”, “What is the language?”, and “How well does this person speak English?” For our purposes, we included in the limited English proficient estimate individuals over the age of 18 who self-reported that they speak English “not well” or “not at all”. We determined the total number of limited English proficient individuals as compared to the population that is proficient in English (those who reported they speak English “very well” or “well”) for both the 2006-2008 American Community Survey data and the 2000 U.S. Census data to show the growth over a period of time. 
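The working definition above can be expressed as a small classification routine. This is only an illustrative sketch of the logic described in the text, not GAO's actual processing of the survey microdata; the function name, argument names, and the handling of English-only speakers and missing responses are assumptions for illustration.

```python
def classify_proficiency(age, speaks_other_language_at_home, english_ability=None):
    """Classify one survey respondent under the report's working definition.

    Hypothetical record fields for illustration; actual American Community
    Survey microdata processing differs in detail.
    """
    if age < 18:
        return "excluded"      # the estimate covered adults over the age of 18
    if not speaks_other_language_at_home:
        return "proficient"    # English-only speakers are not asked the ability question
    if english_ability in ("very well", "well"):
        return "proficient"    # counted as proficient in English
    if english_ability in ("not well", "not at all"):
        return "limited"       # counted in the limited English proficient estimate
    return "unclassified"      # missing or other response
```

A respondent aged 30 who speaks Spanish at home and reports speaking English "not well" would fall into the limited English proficient group, while a 45-year-old who speaks only English at home would be counted as proficient.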
Because the American Community Survey is based on a probability sample drawn through random selection, the sample used is only one of a large number of samples that might have been selected. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 4.5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. In this report, all Public Use Microdata Area level percentage estimates derived from the 2006-2008 American Community Survey have 95 percent confidence intervals of plus or minus 4.5 percentage points or less, unless otherwise noted. We also conducted interviews at and gathered relevant studies and educational materials from federal agencies, organizations that provide financial literacy and education, and organizations that serve or advocate for populations with limited English proficiency. We interviewed staff at the Department of the Treasury’s Office of Financial Education, Federal Trade Commission, Internal Revenue Service, Federal Deposit Insurance Corporation, and the Department of Health and Human Services’ Office of Refugee Resettlement.
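For a simple random sample, the margin of error behind a confidence interval like the one described above can be approximated with the standard normal-approximation formula for a proportion. This is a sketch for intuition only: the American Community Survey uses a more complex sample design (and replicate weights) than simple random sampling, so its published margins of error are computed differently.

```python
import math

def margin_of_error_95(p, n):
    """Approximate 95 percent margin of error for a sample proportion p
    observed in a simple random sample of size n (normal approximation)."""
    z = 1.96  # critical value for a 95 percent confidence level
    return z * math.sqrt(p * (1 - p) / n)

# A proportion of 0.50 from a sample of 1,000 respondents carries a
# margin of error of roughly 0.03, i.e., about 3 percentage points.
moe = margin_of_error_95(0.50, 1000)
interval = (0.50 - moe, 0.50 + moe)
```

Quadrupling the sample size halves the margin of error, which is why estimates for small language groups in a survey carry wider confidence intervals.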
We also interviewed representatives and gathered documentation from organizations that address financial literacy issues, including Consumer Action, Consumer Federation of America, and the Jump$tart Coalition for Personal Financial Literacy; organizations that represent the interests of populations that include individuals with limited English proficiency, including the Asian American Justice Center, National Coalition for Asian Pacific American Community Development, National Council of La Raza, and Southeast Asia Resource Action Center; and organizations that represent financial service providers, including the American Bankers Association, Credit Union National Association, and National Foundation for Credit Counseling. We also gathered information from three academic researchers who focus on issues related to financial literacy or limited English proficiency. In addition, we conducted a series of 10 focus groups to discuss the barriers that individuals with limited English proficiency may face in improving financial literacy and conducting their financial affairs. Information we collected from our focus groups and from the organizations we contacted provided context on the issues discussed, but this information is not generalizable to the entire populations represented by the focus groups. Further, our work may not have addressed all of the different perspectives of the many diverse cultures that make up the limited English proficient population in the United States. For each focus group, we used a series of semi-structured questions to learn about participants’ observations and experiences related to language and other barriers that impede financial literacy and how they address these challenges.
The focus groups included:

- 1 with 11 limited English proficient consumers whose native language was Spanish and who were enrolled in English-language classes sponsored by the Hispanic Committee of Virginia;
- 1 with 11 limited English proficient consumers whose native language was Vietnamese and who utilize the services of Boat People SOS, a community-based organization based in the Washington, D.C. area;
- 4 that collectively included 20 staff members representing 15 financial institutions—large banks, community banks, and credit unions across the country whose clients include a large number of limited English proficient individuals with a wide range of native languages;
- 1 with 10 staff members representing 5 credit counseling and financial education agencies that provide services in multiple languages;
- 2 that collectively included 19 staff members representing 16 nonprofit community-based organizations that largely serve Hispanic communities; and
- 1 with 9 staff members representing 8 nonprofit community-based organizations that largely serve a variety of Asian communities.

We conducted our work from August 2009 to May 2010 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objective. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objective and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.

Many entities—including federal agencies, state and local governments, financial institutions and other private sector entities, schools, community-based agencies, and other nonprofit organizations—sponsor financial literacy and education initiatives.
These efforts cover a wide variety of topics, target a range of audiences, and include classroom curricula, print materials, Web sites, broadcast media, and individual counseling. Highlighted below are selected examples of initiatives taken in the federal, nonprofit, and private sectors that seek to reach, in particular, individuals with limited English proficiency. About 20 different federal agencies operate numerous financial literacy programs and initiatives, several of which are targeted in part or whole at individuals with limited proficiency in English. The federal government’s multiagency Financial Literacy and Education Commission sponsors the My Money Web site (www.MyMoney.gov), which serves as a portal to more than 260 other federal financial education sites. The site has both English- and Spanish-language versions. The commission also sponsors a financial literacy telephone hotline that supports both English- and Spanish-speaking callers, as well as a financial literacy “tool kit” of publications available in English and Spanish. One of the most widely used federal financial literacy programs is the Federal Deposit Insurance Corporation’s “Money Smart,” a financial education curriculum designed to help individuals who are outside the financial mainstream develop financial skills and positive banking relationships. The instructor-led curriculum is offered in English, Spanish, Chinese, Hmong, Korean, Vietnamese and Russian, while the computer-based curriculum is available in English and Spanish. The Federal Trade Commission has developed more than 150 culturally appropriate educational publications in Spanish, according to an agency official, many of which cover financial topics, and its Spanish-language Web site received more than 1 million hits in fiscal year 2009. 
The Federal Trade Commission has also created Spanish-language videos on such topics as avoiding foreclosure rescue scams, and agency staff have provided interviews on financial issues to local and national Spanish-language media. A variety of community-based and national nonprofit organizations throughout the nation have financial literacy initiatives aimed at populations with limited proficiency in English. Many housing counseling agencies approved by the Department of Housing and Urban Development offer services in multiple languages. For example, the Minnesota Home Ownership Center has interpreters trained in homeownership issues and financial terminology and offers homebuyer-education classes in Spanish, Cambodian, Russian, and Hmong. Some community-based organizations also provide financial literacy information through their English as a Second Language programs, using resources such as the Money Smart curriculum, which includes such topics as the U.S. credit system and credit scores, and purchasing a home. Some agencies offer individual counseling. For example, two large nationwide providers of credit counseling and financial education can conduct telephone sessions in at least 15 languages directly and about 150 languages using a translation-services contractor. Many nonprofit agencies offer written publications in multiple languages intended to improve consumer financial literacy. For example, the National Coalition for Asian Pacific American Community Development publishes financial literacy resources in Chinese, Korean, Vietnamese, Hindi, Urdu, and Samoan. Similarly, Consumer Action posts to its Web site consumer financial information in several languages. Several financial institutions and other private sector entities have sponsored financial literacy initiatives aimed at consumers who speak languages other than English.
For example, Freddie Mac’s CreditSmart®—a financial education curriculum designed to help consumers build and maintain better credit and become homeowners—is available in Spanish through CreditSmart Español and in Chinese, Korean, and Vietnamese through CreditSmart Asian. The American Bankers Association provides a newsletter in Spanish called Money Talks that is tailored to varying age groups and offers personal finance booklets in Spanish on such topics as saving, credit, budgeting, checking accounts, and mortgages. The Credit Union National Association offers “El Poder es Tuyo,” a Spanish-language personal-finance Web site that provides culturally relevant articles, videos, and worksheets specifically designed for Hispanics.

In addition to the contact named above, Jason Bromberg (Assistant Director), Grant Mallie, Linda Rego, Rhonda Rose, Jennifer Schwartz, Andrew Stavisky, and Betsey Ward made key contributions to this report.

Financial Literacy and Education Commission: Progress Made in Fostering Partnerships, but National Strategy Remains Largely Descriptive Rather Than Strategic. GAO-09-638T. Washington, D.C.: April 29, 2009.

Financial Literacy and Education Commission: Further Progress Needed to Ensure an Effective National Strategy. GAO-07-100. Washington, D.C.: December 4, 2006.

Credit Reporting Literacy: Consumers Understood the Basics but Could Benefit from Targeted Educational Efforts. GAO-05-223. Washington, D.C.: March 16, 2005.

Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004.

Centers for Medicare and Medicaid Services: CMS Should Develop an Agencywide Policy for Translating Medicare Documents Into Languages Other Than English. GAO-09-752R. Washington, D.C.: July 30, 2009.

Medicare: Callers Can Access 1-800-MEDICARE Services, but Responsibility within CMS for Limited English Proficiency Plan Unclear. GAO-09-104. Washington, D.C.: December 29, 2008.

VA Health Care: Facilities Have Taken Action to Provide Language Access Services and Culturally Appropriate Care to a Diverse Veteran Population. GAO-08-535. Washington, D.C.: May 28, 2008.

No Child Left Behind Act: Education’s Data Improvement Efforts Could Strengthen the Basis for Distributing Title III Funds. GAO-07-140. Washington, D.C.: December 7, 2006.

No Child Left Behind: Education Assistance Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-07-646T. Washington, D.C.: March 23, 2007.

Child Care and Early Childhood Education: More Information Sharing and Program Review by HHS Could Enhance Access for Families with Limited English Proficiency. GAO-06-807. Washington, D.C.: August 17, 2006 (Spanish Summary: GAO-06-949; Chinese Summary: GAO-06-950; Korean Summary: GAO-06-951; Vietnamese Summary: GAO-06-952).

No Child Left Behind Act: Assistance from Education Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-06-815. Washington, D.C.: July 26, 2006.

Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations. GAO-06-52. Washington, D.C.: November 2, 2005 (Spanish Summary: GAO-06-185; Chinese Summary: GAO-06-186; Vietnamese Summary: GAO-06-187; Korean Summary: GAO-06-188).
According to Census data, more than 12 million adults in the United States report they do not speak English well or at all. Proficiency in reading, writing, speaking, and understanding the English language appears to be linked to multiple dimensions of adult life in the United States, including financial literacy--the ability to make informed judgments and take effective actions regarding the current and future use and management of money. The Credit Card Accountability, Responsibility and Disclosure Act of 2009 mandated GAO to examine the relationship between fluency in the English language and financial literacy. Responding to this mandate, this report examines the extent, if any, to which individuals with limited English proficiency are impeded in their financial literacy and conduct of financial affairs. To address this objective, GAO conducted a literature review of relevant studies, reports, and surveys, and conducted interviews at federal, nonprofit, and private entities that address financial literacy issues and serve people with limited English proficiency. GAO also conducted a series of focus groups with consumers and with staff at community and financial organizations. GAO makes no recommendations in this report. Staff at governmental, nongovernmental, and private organizations that work with non-English speaking populations consistently told us that, in their experience, a lack of proficiency in English can create significant barriers to financial literacy and to conducting everyday financial affairs. For example, service providers and consumers with limited English proficiency told us that because most financial documents are available only in English, individuals with limited English proficiency can face challenges completing account applications, understanding contracts, and resolving problems, such as erroneous bills. 
In addition, financial education materials--such as print material, Web sites, broadcast media, and classroom curricula--are not always available in languages other than English and, in some cases, Spanish. Further, information and documents related to financial products tend to be very complex and can use language confusing even to native English speakers. In some cases, written financial materials are provided in other languages, but the translation may not be clear if it is not written using colloquial or culturally appropriate language. Interpretation (oral translation) can also be of limited usefulness if the interpreter does not fully understand or is not able to explain the material, a problem exacerbated by the fact that adults with limited English proficiency often receive assistance from their minor children. Many factors other than language also influence the financial literacy of individuals with limited English proficiency. For example, immigrants may lack familiarity with the U.S. financial system and its products, which can differ greatly from those in their native countries. Cultural differences can also play a role in financial literacy because different populations have dissimilar norms, attitudes, and experiences related to managing money. For instance, in some cultures carrying debt is viewed negatively, which may deter immigrants from such cultures from taking loans to purchase homes or cars and building credit histories. In addition, some studies have reported a correlation between financial literacy and levels of income and education. As a result of these issues, some service providers and advocates suggested that efforts to improve the financial literacy of people with limited English proficiency go beyond translation and also address underlying cultural and socioeconomic factors. Evidence suggests that people with limited English proficiency are less likely than the U.S. 
population as a whole to have accounts at banks and other mainstream financial institutions. They are also more likely to use alternative financial services--such as payday lenders and check-cashing services--that often have unfavorable fees, terms, and conditions. Further, the Federal Trade Commission and immigrant advocacy organizations have noted that some populations with limited English language skills may be more susceptible to fraudulent and predatory practices. Several service providers we spoke with said that financial education can play an important role in helping consumers with limited English proficiency avoid abusive and predatory practices.
The ADA requires that fixed-route transit systems be made accessible to persons with disabilities—for example, by having lift- and ramp-equipped vehicles and announcing transit stops—but acknowledges that some disabled individuals are not able to use fixed-route services even with such accessibility features. To ensure that these individuals have equal access to public transportation, the ADA introduced a requirement that all public entities operating a fixed-route transit system must provide complementary and comparable ADA paratransit service. DOT issued final regulations to implement the ADA’s public transportation provisions on August 22, 1991. The regulations do not explicitly state how transit agencies are to provide paratransit service; rather, they require such agencies to offer a level of service that is “comparable” to the level of service offered to the general public without disabilities. Comparability is defined using six ADA minimum service requirements: service area, hours and days of service, fares, response time, trip purpose restrictions, and capacity constraints (see table 1). ADA paratransit service is generally an origin-to-destination service, meaning that paratransit vehicles pick up riders at their homes or other locations and take them to their desired destinations. Transit agencies are allowed to establish whether they will provide door-to-door service, wherein the driver offers assistance from the rider’s door to the vehicle (and comparable assistance at the destination), or curb-to-curb service, wherein assistance is not provided until the rider reaches the vehicle. According to DOT guidance, if the base model of service chosen is curb-to-curb, it may still be necessary to provide door-to-door service for those persons who require it in order to travel from their point of origin to their point of destination.
All public transit agencies required to provide ADA paratransit services must establish a process for certifying individuals (including both local residents and visitors in the transit agencies’ respective service areas) as ADA paratransit eligible. The ADA does not specify a process for how transit agencies determine eligibility, but it states the criteria that must be used to make the determination. A Transit Cooperative Research Program (TCRP) report on ADA paratransit eligibility certification practices found that most agencies used a combination of the processes identified in table 2. While the ADA establishes minimum requirements for ADA paratransit, transit agencies are free to provide any additional level of service that they or their communities find appropriate. Types of additional services could include operating paratransit service beyond the fixed-route service area (which may include collecting fares for such trips in excess of twice the fixed-route fare); providing service when the fixed-route system is not operating; and allowing same-day trip requests. According to the Center for Transportation Research, scheduling trips and dispatching vehicles are critical functions in providing ADA paratransit service. Scheduling ADA paratransit trips requires providers to match available vehicles to riders’ trip time and destination requests. In general, the process starts when a passenger calls to reserve a trip. At that time the passenger’s eligibility to receive the service is verified. Service must be provided on at least a next-day basis, though DOT’s ADA regulations permit transit agencies to accept reservations up to 14 days in advance. The trip request is then either entered into the paratransit scheduling software or scheduled manually. On the day of the trip, the dispatcher creates a log sheet or manifest with the trip information for the driver, and the passenger is then picked up and dropped off (see fig. 1).
Two federal agencies, DOT and the Department of Justice (DOJ), have key roles in monitoring, overseeing, and enforcing ADA requirements and providing technical assistance. Their general roles and responsibilities are as follows: Regulations. The Secretary of Transportation has sole authority to issue regulations to carry out the section of the ADA governing paratransit as a complement to fixed-route service. FTA has primary responsibility for administering these regulations. Oversight. As part of DOT’s oversight, FTA conducts general and special oversight reviews to evaluate recipients of Urbanized Area Formula Program grants (grantees), examining their use of funds and adherence to civil rights laws, among other things. Civil rights reviews are one of five types of special reviews. FTA’s Office of Civil Rights is responsible for civil rights compliance and monitoring to ensure nondiscriminatory provision of transit services. ADA compliance reviews are a subset of civil rights special reviews and can be targeted to one of three specific ADA areas: fixed-route compliance, rail station compliance, and ADA paratransit service compliance. FTA also provides technical assistance to transit agencies on fulfilling ADA requirements and investigates discrimination complaints filed by the public. Data. FTA is also responsible for maintaining the NTD, which was established by Congress to be the primary source for information and statistics on the nation’s transit systems. Recipients or beneficiaries of certain FTA grants are required to submit data to the NTD on information such as their operating expenses, revenue, and services. Transit agencies reporting to NTD are required to provide two data points related to ADA paratransit services: the number of ADA paratransit trips provided annually and total annual expenditures for paratransit services that are attributable to ADA requirements. Enforcement.
DOJ’s ADA enforcement responsibility generally involves either filing a federal lawsuit upon referral of a finding of noncompliance by DOT or intervening in a privately filed lawsuit. DOJ may also resolve complaints of ADA noncompliance through settlement agreements and consent decrees with public transit agencies aimed at obtaining ADA compliance. There is no national-level information to accurately measure the extent to which agencies providing ADA paratransit service are complying with the ADA’s paratransit service requirements. However, as a condition of receiving federal funds, every transit agency has to self-certify and assure that it is complying with the DOT ADA regulations. According to FTA, this certification and assurance is its starting point for assessing transit agencies’ compliance with ADA requirements. Additionally, every Urbanized Area Formula Program grantee receives FTA’s general oversight triennial review once every 3 years, which is one of the primary means FTA uses to evaluate whether grantees are meeting federal requirements. Although the triennial reviews include a review of the grantee’s compliance with ADA requirements, they provide no detailed information about ADA paratransit compliance because ADA compliance is 1 of 24 areas of transit operations covered in the review. According to FTA officials, negative triennial review findings may be considered in selecting transit agencies for a specialized ADA paratransit review. FTA’s specialized ADA paratransit compliance reviews examine multiple aspects of a transit agency’s paratransit service. Compliance reviews include an examination of the selected transit agency’s policies and standards for providing ADA complementary paratransit services. Reviews also include a determination of whether capacity constraints or areas of non-compliance exist.
For example, a capacity constraint determination can be made by reviewing data on the selected transit agency’s on-time performance, on-board travel time, telephone-hold times, and trip denials. The review also examines compliance related to eligibility determinations, fares, and other ADA paratransit service requirements. FTA uses contractors to conduct the vast majority of its grantee oversight reviews, including specialized compliance reviews such as an ADA paratransit compliance review, although FTA is responsible for overseeing the work performed by its contractors. The results of compliance reviews are documented in written reports. Data about review findings are entered into FTA’s electronic oversight-tracking system, OTRAK. If a deficiency is identified in the course of a compliance review, FTA requires the transit agency to take steps to correct the deficiency and monitors the transit agency’s progress. FTA can keep compliance reviews open and delay final report publication until problems are resolved, a resolution that could occur quickly or take years. (See fig. 2 for a description of the major steps in the compliance review process.) While compliance reviews represent an in-depth examination of a transit agency’s paratransit service, few transit agencies have been selected for an ADA paratransit compliance review. FTA’s most recent contract calls for only 10 compliance reviews of complementary paratransit services to be conducted from 2008 through 2011, or roughly 2 to 3 reviews per year. According to FTA officials, there are approximately 628 urbanized area fixed-route transit agencies that could be eligible for ADA compliance reviews. Officials told us that the limited number of ADA paratransit compliance reviews conducted each year is because of resource constraints and the time needed to complete an in-depth review. We analyzed 15 ADA paratransit compliance review final reports from January 2005 through April 2011 that were posted on the FTA website.
We found that all 15 transit agencies reviewed from 2005 to 2011 had findings of non-compliance or recommendations related to ADA paratransit service. The following are examples of non-compliance findings and recommendations from the final reports we reviewed: Fourteen of the 15 agencies had findings of capacity constraints with their ADA paratransit service. For example, one agency was found to have policies around reservations and scheduling that led to wait lists and difficulties ensuring that scheduled ride times adhered to ADA requirements. Another agency had findings of non-compliance with its telephone access and hold times for trip scheduling because of inadequate staffing capacity. All 15 transit agencies reviewed also had findings related to their ADA paratransit eligibility processes. For example, one FTA compliance review found that a local transit agency was improperly denying ADA complementary paratransit service to some individuals who should be eligible. As a result, the agency proposed several changes to its eligibility determination process to correct the issues. In another final report, there were 24 findings or recommendations related to the transit agency’s eligibility processes. These findings ranged from information forms containing insufficient detail on the eligibility process to findings of non-compliance related to the rider-eligibility suspension policy. These compliance reviews provide some information about how paratransit services are complying with ADA requirements, but they do not allow for a determination of the extent to which transit agencies overall are complying with ADA paratransit requirements. The findings of non-compliance in the reports discussed above are not generalizable to the 628 urbanized area fixed-route transit agencies, both because of the low number of reviews conducted and because the reviews were not conducted on a generalizable sample of transit agencies.
Rather, FTA officials told us that the transit agencies that receive the specialized compliance reviews are specifically selected by FTA for review because FTA has reason to believe those agencies may be experiencing ADA paratransit compliance issues. Although FTA uses a risk-based approach to determine which transit agencies are selected for compliance reviews, FTA does not have a formalized or transparent selection process. According to FTA officials, transit agencies may be selected for an ADA paratransit compliance review for any number of reasons, including rider complaints (which, according to FTA officials, are the best indicators available for making the most effective use of compliance resources), media coverage, findings from triennial reviews, legal actions that do not involve FTA, information from the transportation industry, congressional interest, and input from FTA regional offices. In selecting an agency for review, FTA may also consider the burden to a transit agency if it were to receive multiple oversight reviews, such as triennial reviews or state compliance reviews, in the same fiscal year. In those cases, FTA officials said they take steps to focus contractor and oversight resources to decrease burden on the transit agency, while still addressing possible compliance issues. FTA officials, however, could not provide documentation that outlines the compliance review selection criteria, and stated that there are no formalized criteria to guide the selection of transit agencies for review. As discussed above, the ADA paratransit compliance review process itself is documented, so the lack of documented selection criteria is notable.
While the factors that FTA currently uses may be appropriate for selecting transit agencies for an ADA compliance review, FTA’s informal process does not adhere to our guidance on internal control standards related to the communication of policy, documentation of results, and monitoring and reviewing of grantee activities and findings. We have previously reported that these standards are critical to maintaining the thoroughness and consistency of compliance reviews. The documentation should be readily available for examination and appear in management directives, administrative policies, or operating manuals. Additionally, grant accountability guidance states that preparing policies and procedures that outline what is expected in any particular program or process, as part of an agency’s internal control system, meets an important element of strong federal grant accountability best practices. In the past, FTA examined its process for selecting agencies for compliance reviews but decided to retain its informal selection process. Specifically, in 2006, FTA commissioned a report to help develop a method to prioritize transit systems for ADA compliance reviews, but FTA did not adopt the proposed methodology (Federal Transit Administration, Team MAC Final Report on ADA Program Management Support, Contract No. DTFT60-05-R-00013, October 31, 2006). According to FTA officials, the proposed selection methodology was flawed because the selection criteria, such as select NTD data—fixed-route fleet size, ADA cost per trip, and changes in reported ADA expenses—were not indicators of non-compliance. FTA officials, however, said that the current selection factors bring problem agencies and other possible ADA compliance issues to their attention and serve as a good means for selecting agencies for review. Whatever criteria FTA deems appropriate to select transit agencies for review, it cannot ensure that those criteria will be consistently applied if they are not documented and communicated to FTA regional offices, contractors, and transit agencies.
Nine final compliance review reports have not been posted to FTA’s website. These final reports account for reviews conducted from February 2004 through July 2010. Even though there are no official FTA requirements for when a report must be completed and posted on the website, FTA officials acknowledged that timeliness of a report’s completion and online posting is a problem area that they are actively working to address. FTA officials said the backlog of reports needing to be posted online was because of technical issues. According to FTA, all finalized ADA compliance review reports are publicly available documents. However, if the reports have not been posted to FTA’s website, then the only way to access their content is through a Freedom of Information Act request, which requires time and financial resources. Transit agencies and industry groups told us that they look to these compliance reviews as a form of guidance on FTA’s interpretation of ADA requirements. Particularly because FTA conducts a limited number of ADA paratransit compliance reviews, both transit agencies and FTA would benefit from posting final compliance reports in a timely manner. According to our survey of transit agencies, demand for ADA paratransit trips increased from 2007 to 2010. Our survey indicates that demand increased across multiple measures, such as more riders registered to use ADA paratransit service and more ADA paratransit trips provided. Most transit agencies—about 73 percent—experienced an increase in the number of individuals registered to use ADA paratransit service. In addition, about 64 percent of transit agencies provided more ADA paratransit trips in 2010 than in 2007. From 2007 to 2010, the average number of individuals registered to use ADA paratransit service at a transit agency increased by 12 percent, and the average number of ADA paratransit trips provided by a transit agency increased 7 percent (see fig. 3).
Increases in demand for ADA paratransit services were driven by the 10 largest transit agencies. ADA paratransit ridership at these transit agencies is substantially greater than at other transit agencies. The average number of individuals registered to use ADA paratransit services at the 10 largest transit agencies increased 22 percent from 2007 to 2010, from an average of 34,758 individuals in 2007 to 42,357 individuals in 2010, compared to a marginally significant average increase of 9 percent at other transit agencies not among the 10 largest agencies. For the 10 largest transit agencies, the average number of riders taking at least one ADA paratransit trip per year increased 27 percent, from an average of 14,202 riders in 2007 to 18,095 riders in 2010. In addition, the average number of ADA paratransit trips provided by these 10 transit agencies increased 31 percent, from an average of 1,533,707 trips in 2007 to 2,006,327 trips in 2010. Other transit agencies did not experience significant increases in the average number of riders taking at least one ADA paratransit trip per year or the number of ADA paratransit trips provided. According to transit agency officials we spoke with, demand for ADA paratransit trips has increased for several reasons. One frequently cited reason was that other organizations that provide or previously provided transportation services for individuals with disabilities have increasingly relied on ADA paratransit services for transportation—a trend sometimes referred to as “ride shedding.” For example, one transit agency official said that demand for ADA paratransit trips increased dramatically when local nonprofit organizations discontinued their dial-a-ride transportation services. Riders who formerly used the dial-a-ride services now use the ADA paratransit system. In addition, many transit agency officials we spoke with told us that ADA paratransit demand has increased because of the growing elderly population. 
Officials pointed to the growth in the elderly population as a reason why more people are living with disabilities and need ADA paratransit services. According to 2010 U.S. census data, the population aged 65 and older grew 15 percent from 2000 to 2010, compared to growth of about 10 percent in the overall population, and the prevalence of disability increased with successively older age groups. Some transit agency officials said that ADA paratransit demand has also increased because of overall population growth, an increasing number of individuals with disabilities living independently, and improvements in ADA paratransit service that have made the service more attractive to riders. ADA paratransit trips are much more costly to provide than fixed-route trips. Based on our survey results, the average cost of providing an ADA paratransit trip in 2010 was $29.30, an estimated three and a half times the average cost of $8.15 to provide a fixed-route trip (see fig. 4). Survey respondents reported average per-trip costs for ADA paratransit in 2010 ranging from $11.11 to $69.25. The costs of providing ADA paratransit and fixed-route services differed between the largest transit agencies and other transit agencies. On average, an ADA paratransit trip cost $42.23 in 2010 for the 10 largest transit agencies, compared to $28.94 per trip for other transit agencies. For fixed-route trips, average costs in 2010 were lower for the 10 largest transit agencies than for other transit agencies: $3.82 for the largest transit agencies compared to $8.24 for others. Despite these differences, the 10 largest transit agencies and other transit agencies spent similar portions of their budgets on providing ADA paratransit services in 2010: 14 percent and 18 percent on average, respectively. Transit agencies have implemented a number of actions aimed at addressing the growing demand for ADA paratransit trips and reducing the costs of ADA paratransit services.
Types of actions agencies are taking include coordinating efforts among various service providers, transitioning passengers from ADA paratransit to fixed-route service, improving the accessibility of fixed-route service, ensuring more accurate eligibility determinations, realigning paratransit service with minimum ADA paratransit requirements, and improving technology for scheduling and dispatch. To meet the needs of ADA paratransit-eligible riders, numerous transit agencies that we surveyed and interviewed reported that they are coordinating with health and human services providers and other local transportation providers. According to our survey of transit agencies, about 59 percent of transit agencies are coordinating with health and human services providers in order to improve ADA paratransit services or address the costs of providing service. Also, about 44 percent of transit agencies are coordinating with other local transit agencies, including 6 of the 10 largest transit agencies. Some transit agency officials we interviewed also told us that they coordinate transportation services. For example, Lane Transit District (Lane County, Oregon) operates a one-call center. The call center coordinates a variety of transportation services, including ADA paratransit service and transportation for seniors and people with low incomes. According to an official, the one-call center makes it easier for people to access services, and the agency benefits from efficiencies associated with providing more group trips. Two of the transit agency officials we spoke with said that they would like to implement coordination efforts but have been unable to get various parties to come together.
In June 2012, we reported several challenges that state and local entities face in their efforts to coordinate services for the transportation disadvantaged (a broader group than ADA paratransit riders), including insufficient federal leadership, changes to state legislation and policies, and limited financial resources in the face of growing unmet needs. Some transit agencies are transitioning passengers from ADA paratransit services to fixed-route service in an effort to manage demand and contain a portion of their costs. According to FTA officials and others, fixed-route systems have become much more accessible since the enactment of the ADA, and nearly all fixed-route buses are now accessible to and usable by persons with disabilities, including wheelchair users. This improved accessibility makes it possible to transition some passengers from paratransit to fixed-route services. Based on our literature review, one of the most effective and long-lasting techniques for reducing the demand for ADA paratransit is transitioning paratransit passengers to fixed-route service, where possible, through travel training and incentives that encourage existing paratransit passengers to use the fixed-route transit service, which we explain more fully below. One source described this as a “win-win” proposition for both the transit agency and the individual. The transit agency is able to use excess capacity on its fixed-route system at minimal cost to the agency. By using the fixed-route system, the passenger may be able to access a wider variety of services and destinations, does not have to pre-schedule travel on paratransit vehicles, and could save money by paying lower fares for fixed-route trips. To assist ADA paratransit riders in transitioning to fixed-route service, several transit agencies are using travel-training programs that show riders how they can use the fixed-route system.
Our survey results show that about 55 percent of transit agencies use travel training as a demand management and cost containment strategy. Some transit agency officials stated that travel training may reduce costs. For example, King County Metro (Seattle, Washington) reported spending about $573,000 in 2011 to provide travel training to over 300 individuals, but estimated it saved about $1,290,000 in paratransit costs by successfully transitioning paratransit patrons to the fixed-route system. Similarly, officials from New Jersey Transit (Newark, New Jersey) told us that they have been successful in getting riders to use the fixed-route system by offering travel training. They have not quantified how many trips are being diverted from paratransit, but told us that surveys of those who have taken travel training show that many are using the fixed-route system. Some transit agencies offer financial incentives to ADA paratransit-eligible individuals to use fixed-route transit services. These incentives are also sometimes extended to persons accompanying the ADA paratransit-eligible rider, which may encourage use of the fixed-route system by persons who cannot use it independently. Some (5 of the 20) transit agencies we interviewed said that they offer fixed-route fare incentives. For example, Access Services (Los Angeles County, California) offers paratransit riders free trips on fixed-route systems throughout the county. According to Access Services, in July 2012, ADA paratransit registrants took 2.1 million trips on Los Angeles County fixed-route systems. On an annual basis—assuming that over 25 million trips will be taken per year at an avoided cost of $20 per paratransit trip—this represents a cost savings of $500 million, according to Access Services. Officials from Bay Area Rapid Transit (Oakland, California) also told us that they offer fare incentives to get ADA paratransit riders to use the fixed-route system.
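The agency-reported savings above rest on straightforward arithmetic, which the short sketch below re-derives using only the figures cited in the text. This is an illustrative check, not GAO analysis; the variable names are invented for the example, and the $20 avoided cost per trip and 25-million-trip annual volume are Access Services' own assumptions.

```python
# Illustrative re-derivation of the agency-reported savings figures.
# All inputs come from the figures cited in the text; names are invented for this sketch.

# King County Metro (2011): travel-training spending vs. estimated avoided paratransit costs.
training_cost = 573_000                # reported travel-training spending
avoided_paratransit_cost = 1_290_000   # estimated paratransit savings
net_savings = avoided_paratransit_cost - training_cost
print(f"King County net savings: ${net_savings:,}")  # King County net savings: $717,000

# Access Services (Los Angeles County): annualized fare-incentive savings.
july_2012_trips = 2_100_000            # fixed-route trips by paratransit registrants in one month
annualized_trips = july_2012_trips * 12  # 25,200,000 -- consistent with "over 25 million"
assumed_cost_per_trip = 20             # Access Services' assumed avoided cost per paratransit trip
annual_savings = 25_000_000 * assumed_cost_per_trip
print(f"Annualized trips: {annualized_trips:,}; savings: ${annual_savings:,}")
```

As computed, the one-month figure annualizes to about 25.2 million trips, consistent with the "over 25 million" assumption, and 25 million trips at $20 each yields the $500 million savings figure reported by Access Services.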
Our survey results showed that over 62 percent of transit agencies reported making accessibility improvements to their fixed-route systems since 2007. Additionally, one transit agency that we spoke with said that it has made changes to its vehicles to accommodate larger wheelchairs and mobility devices. Others have implemented feeder service as a way to transport passengers from their homes or other pick-up locations to fixed-route bus or train stops. However, according to FTA officials, one of the biggest challenges to using fixed-route service is the inaccessibility (or nonexistence) of sidewalks and pedestrian infrastructure. For example, a lack of sidewalks may prevent persons with disabilities from traveling to fixed-route bus stops, thereby increasing the need for ADA paratransit services. Such pedestrian improvements, however, rarely fall under the transit system’s direct influence or control. To assist transit agencies in addressing these improvements, FTA issued a policy in 2011 that simplifies the process for grantees to qualify for FTA funding for pedestrian improvements that are related to transit service. Additionally, transit agencies are required to maintain accessibility features (e.g., elevators and bus lifts) in good working order and to follow ADA policies, such as making stop announcements, needed to make the fixed-route system usable by persons with disabilities. A number of transit agencies are seeking to more accurately determine riders’ eligibility for ADA paratransit trips to manage changes in paratransit demand and costs. According to the National Council on Disability, determining eligibility for each specific trip request is one strategy that transit agencies are using to have at least some paratransit riders’ trips accommodated on the fixed-route system rather than through ADA paratransit. According to our survey, almost 49 percent of transit agencies have implemented a more rigorous eligibility process in an effort to manage costs.
About 36 percent of survey respondents use an in-person functional assessment, including 9 of the 10 largest transit agencies. Additionally, some of the transit agency officials we spoke with use the eligibility process to manage demand for paratransit service and help ensure that the service remains available for those passengers who need it. These transit agencies are using in-person interviews or functional assessments to determine whether a disability prevents the applicant from using the fixed-route system. For example: Washington Metropolitan Area Transit Authority (Washington, D.C. area) certifies its riders’ eligibility using in-person interviews and functional assessments. According to an official, the process begins with a staff consultation in which the customer’s travel needs and transit knowledge are evaluated. The eligibility determination is then made based on application data (including medical diagnoses from the customer’s health care provider), the interview, and a functional assessment with physical and, when needed, cognitive components. Metro Mobility (St. Paul, Minnesota) uses a two-part paper application, with an in-person functional assessment and interview, if needed. The application includes a self-reported questionnaire and a professional verification of disability. In order to reduce costs, over 18 percent of the transit agencies we surveyed have realigned their paratransit service area to better match the minimum ADA paratransit requirement. Additionally, about 22 percent have realigned their paratransit service hours to better match the minimum ADA paratransit requirements. Officials at StarTran (Lincoln, Nebraska) told us that they are proposing to reduce their paratransit service area to the required ¾ mile of fixed-route service and said that reducing the paratransit service area would result in considerable cost savings.
In 2010, King County Metro projected the estimated savings if the agency aligned its service area, hours, service level, and fares with the ADA paratransit minimums. The estimated savings included $2.1 million if the ADA minimum service area was adopted; $700,000 if service hours were adjusted; $1.5 million for moving from a door-to-door to a curb-to-curb policy; and $1.2 million in savings, plus $741,000 in increased revenue, if fares were adjusted to the basic adult fixed-route fare. Using available technologies such as computerized scheduling and dispatching software can help lower ADA paratransit service costs by increasing service efficiency, according to transit agency officials we spoke with and various studies. Officials at a majority of the transit agencies we spoke with (14 of 20) said that they are using available technologies. For example, Dallas Area Rapid Transit (Dallas, Texas) is using technology to help handle an increasing number of trips, clients, and vehicles. It has an automated system that allows riders to request and confirm trips over the phone without the need of a call taker. This approach makes trip requests more convenient for riders and less labor-intensive for the agency, thereby improving effectiveness and efficiency, according to transit officials. In 2007, New York City Transit made improvements to its automatic scheduling and dispatching system, which schedules up to 22,500 paratransit trips on weekdays. The improvements feature an intelligent transportation-system automatic-vehicle-location and monitoring project to equip all vehicles with vehicle-location and mobile-data computers, thus freeing dispatchers to take corrective action based on accurate data and to communicate scheduling changes to drivers in real time. The ADA’s mandate for paratransit services has been an important catalyst for progress in providing equal access to public transportation for all individuals.
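Taken together, King County Metro’s 2010 projections imply a combined annual fiscal impact, which can be totaled as a quick arithmetic check. All dollar amounts below are restated from the projection described above; the grouping and labels are ours.

```python
# Quick arithmetic check on King County Metro's 2010 projections
# (all figures restated from the report; amounts in dollars).
savings = {
    "ADA minimum service area": 2_100_000,
    "adjusted service hours": 700_000,
    "door-to-door to curb-to-curb policy": 1_500_000,
    "fares adjusted to adult fixed-route fare": 1_200_000,
}
added_revenue = 741_000  # increased fare revenue, reported separately

total_savings = sum(savings.values())
total_impact = total_savings + added_revenue
print(f"Projected savings: ${total_savings:,}")   # $5,500,000
print(f"Total fiscal impact: ${total_impact:,}")  # $6,241,000
```

The projected savings alone total $5.5 million; counting the added fare revenue brings the combined impact to roughly $6.2 million per year.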
Overseeing the provision of these services at hundreds of transit agencies is an important responsibility for FTA. ADA paratransit compliance reviews—although limited in number—examine compliance with ADA paratransit service requirements. As we noted, FTA selects agencies for review for various reasons, including rider complaints, media coverage, and findings from triennial reviews. However, FTA has no formalized criteria to guide the selection of transit agencies for review. Without a formalized, documented process for selecting transit agencies for compliance reviews, FTA is not following GAO internal control standards and grantee-oversight best practices. FTA cannot ensure an effective oversight process if critical elements of internal controls are not present. FTA’s practice is to make publicly available, via its website, final ADA compliance review reports that contain findings from completed compliance reviews. However, nine final review reports—conducted from 2004 to 2010—have not been posted to FTA’s website. Even though there are no time frames governing when a report must be posted, timelier posting of these reviews would be beneficial to transit agencies and industry groups that use these compliance reviews as a form of guidance on FTA’s interpretation of ADA requirements. Making these reports publicly available as soon as possible could assist FTA in its oversight of transit agencies and assist transit agencies in their compliance efforts. Finally, transit agencies reporting to NTD are required to provide limited data related to ADA paratransit services, including the number of ADA paratransit trips provided annually and total annual expenditures attributable to ADA paratransit requirements. We found that the required data fields were often incomplete. For example, for data from 2005 through 2010 (the most recent year available), about 32 percent of transit agencies reporting to NTD did not provide data in one or more years on the number of ADA trips provided.
Because the NTD is intended to provide timely and accurate information to Congress and others, FTA would benefit from advising transit agencies on how to accurately and consistently provide the required data. We recommend that the Secretary of Transportation direct the FTA Administrator to take the following actions: 1. To help ensure that FTA’s ADA paratransit compliance reviews adhere to GAO recommended internal controls and grantee oversight best practices, document and make publicly available a formal approach for selecting transit agencies for review. 2. To help transit agencies and stakeholders have access to up-to-date ADA paratransit compliance reviews and compliance findings, post the backlog of ADA compliance review final reports on FTA’s website and establish processes for the timely posting of future compliance review reports. 3. To improve NTD data collection for ADA paratransit, provide guidance to transit agencies on how to accurately complete existing ADA paratransit fields. We provided DOT with a draft of this report and the e-supplement for review and comment. DOT officials neither agreed nor disagreed with our recommendations, but provided technical comments, which we incorporated as appropriate. DOT did not have any comments on the e-supplement. DOT officials stated that FTA uses consumer complaints as programmatic criteria to identify areas of potential noncompliance and considers complaints to be the best available indicator of where to target its limited investigative resources. DOT officials reiterated that paratransit data collected for the NTD are intended to provide information useful for FTA’s monitoring of the size of ADA paratransit services relative to demand response services. According to DOT officials, these data are not intended to assess overall ADA paratransit compliance.
We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Administrator of the Federal Transit Administration. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Wise at 202-512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report addresses the following three objectives: (1) What is known about the extent of compliance with the Americans with Disabilities Act of 1990 (ADA) paratransit requirements? (2) What changes have occurred in ADA paratransit demand and costs since 2007? (3) What actions are agencies taking to help address changes in the demand for and costs of ADA paratransit service? To determine what is known about the extent of compliance with ADA paratransit requirements, we reviewed ADA regulations, the Federal Transit Administration (FTA) guidance on the regulations, and FTA’s ADA compliance reports from 2005 to 2011. In addition, we examined FTA’s National Transit Database to assess the extent to which it contains data related to ADA paratransit services and transit agencies’ compliance with ADA paratransit requirements. We also interviewed FTA officials about the various processes it uses to assess compliance and consulted our prior work on transportation accessibility and FTA’s oversight processes. To identify changes that have occurred in ADA paratransit demand and costs since 2007, we examined data from FTA’s National Transit Database on the number of ADA paratransit trips provided annually and total annual expenditures attributable to ADA complementary paratransit requirements. 
In reviewing National Transit Database data, we determined that they were not reliable for our purposes. Appendix II contains a more detailed discussion of our data reliability assessment. To address our second and third objectives, we conducted semistructured interviews with 20 transit agencies regarding their provision of ADA paratransit services. We based our selection of these transit agencies on a variety of characteristics, including geographic diversity, size of ADA paratransit system, and transit agencies deemed notable for their ADA paratransit systems. Because we used a non-generalizable sample of transit agencies, findings from these interviews cannot be used to make inferences about other transit agencies. However, we determined that the selection of these transit agencies was appropriate for our design and objectives and that the selection would generate valid and reliable evidence to support our work. Table 3 provides more detailed information about the transit agencies we interviewed. We also interviewed representatives from relevant industry and disability advocacy groups, including the following: American Public Transportation Association, Community Transportation Association of America, Disability Rights Education and Defense Fund, Easter Seals Project ACTION, National Independent Living Council, and Texas Statewide Council on Independent Living. Moreover, to identify the actions that transit agencies are taking to help address changes in costs of and demand for ADA paratransit service, we reviewed relevant literature pertaining to leading practices for addressing costs and demand of paratransit services. We conducted a Web-based survey of transit agencies from May through July 2012 to address the second and third objectives.
Results of this survey and the survey instrument have been published in GAO-13-18SP, ADA PARATRANSIT SERVICES: Survey of Public Transit Agency Officials on Services and Costs, an E-supplement to GAO-13-17, and can be found at the GAO website. We constructed our population of transit agencies for our survey sample using 2010 data in FTA’s National Transit Database (NTD). Using NTD data, we determined that there were 546 agencies that provided demand response services, which according to FTA, was the mode of service most likely to correlate with provision of ADA paratransit services. The total survey sample was 145 transit agencies. The survey sample was composed of two strata. One was a certainty sample of the 10 transit agencies that, based on NTD data, had the largest service area populations in 2010, accounting for 29 percent of the total service area population in our total sample. The second stratum was ordered by population size and selected randomly to obtain representation from agencies with populations of various sizes. For this stratum we randomly selected 135 transit agencies that provide demand-response service from the remaining population after the certainty sample, a population of 536 agencies. We obtained completed questionnaires from 112 respondents, or about 77 percent of our sample. The survey results can be generalized to the population of transit agencies that provide demand-response service. As noted above, we are issuing an electronic supplement to this report that shows a more complete tabulation of our survey results. We developed a questionnaire to obtain information about transit agencies’ provision of ADA paratransit services. GAO identified potential survey recipients from a list provided by FTA on its Urban Agency CEO Contact list.
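The two-stratum design described above (a certainty stratum of the 10 largest agencies plus 135 agencies drawn at random from the remaining 536) can be sketched as follows. The agency list below is simulated, since the actual NTD sampling frame is not reproduced here; only the stratum sizes and weight logic mirror the design.

```python
import random

# Sketch of the two-stratum design (simulated agency list; the real frame
# was the 546 demand-response agencies in FTA's 2010 NTD).
random.seed(1)
frame = [{"id": i, "population": random.randint(10_000, 5_000_000)}
         for i in range(546)]

# Stratum 1: certainty sample of the 10 largest service-area populations.
by_pop = sorted(frame, key=lambda a: a["population"], reverse=True)
certainty = by_pop[:10]

# Stratum 2: 135 agencies drawn at random from the remaining 536.
remaining = by_pop[10:]
random_stratum = random.sample(remaining, 135)

# Design weights: certainty units represent only themselves (weight 1);
# each random-stratum unit represents 536 / 135 ≈ 3.97 agencies.
weights = {a["id"]: 1.0 for a in certainty}
weights.update({a["id"]: len(remaining) / 135 for a in random_stratum})

assert len(certainty) + len(random_stratum) == 145  # total sample of 145
```

The weights are what later allow estimates from the 112 respondents to be generalized to all demand-response agencies rather than just the sampled ones.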
In early May 2012, we sent an initial email alerting agency contacts to the upcoming web-based survey, and about a week later we delivered the web-based survey to recipients via email message. The web-based survey questionnaire requested baseline information about service and eligibility processes as well as information related to the cost, demand, and policies and practices transit agencies use to improve provision of ADA paratransit service. To help increase our response rate, we sent two follow-up emails and called agency officials from May through July 2012. The survey was available to transit agency respondents from May 2012 through July 2012. To pretest the questionnaire, we conducted cognitive interviews and held debriefing sessions with five local transit agency officials with knowledge about their ADA paratransit operations. Three pretests were conducted in person with phone participants, while two were conducted solely by phone. We selected pretest respondents to represent different sizes and locations of transit agencies that provide ADA paratransit service. We conducted these pretests to determine whether the questions were clear, whether they were unduly burdensome, and whether they measured what we intended. Additionally, we asked officials in FTA’s Office of Civil Rights to review the questionnaire based on their expertise and knowledge of the program and interviewed them for their feedback on the survey questionnaire. On the basis of feedback from the pretests and expert review, we modified the questions as appropriate. To produce the estimates from this survey, answers from each responding case were weighted in the analysis to account statistically for all the members of the population, including those who were not selected or did not respond to the survey. Estimates produced from this sample generalize to the population of transit agencies that provided demand response services in FTA’s 2010 National Transit Database.
Because our results are based on a sample and different samples could provide different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 10 percentage points). We are 95 percent confident that each of the confidence intervals in this report includes the true value in the study population. Unless we note otherwise, percentage estimates based on all transit agencies have 95 percent confidence intervals of within plus or minus 10 percentage points. Confidence intervals for other estimates are presented along with the estimate where used in the report. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. Among these steps were carefully developing the questionnaire, pretesting it with transit agencies that provide ADA paratransit service, conducting multiple follow-ups to encourage responses to the survey, and contacting respondents to clarify unclear responses. We conducted this performance audit from September 2011 to November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
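How the weighting and the 95 percent confidence intervals described above fit together can be illustrated with a minimal sketch. The responses and design weights below are invented for illustration and are not GAO’s survey data; the sketch uses a normal approximation with a Kish effective sample size, one common way to account for unequal weights.

```python
import math

# Illustrative weighted estimate with a normal-approximation 95% CI.
# responses: 1 = agency reported the practice, 0 = did not (invented data).
responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
weights =   [1.0, 1.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0]

total_w = sum(weights)
p_hat = sum(r * w for r, w in zip(responses, weights)) / total_w

# Kish effective sample size approximates the loss of precision
# from unequal weights.
n_eff = total_w**2 / sum(w * w for w in weights)
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_eff)

print(f"Estimate: {100 * p_hat:.1f}% plus or minus {100 * half_width:.1f} points")
```

With only 12 invented cases the interval is wide; with 112 respondents, intervals shrink toward the plus-or-minus 10 points the report cites.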
We conducted an analysis to determine whether ADA paratransit data in the NTD were sufficiently reliable for the purpose of identifying changes that have occurred in ADA paratransit demand and costs since 2007. We examined data on ADA paratransit trips and ADA paratransit expenses from 2005 to 2010 and interviewed FTA officials about the database. We found data discrepancies, such as incomplete data, that may understate or overstate the number of ADA trips and amount of ADA expenses. As a result, we determined that the ADA paratransit data in the NTD were not sufficiently reliable for the purposes of our review. To identify changes that have occurred in ADA paratransit demand and costs since 2007, we examined data from the NTD on the number of ADA paratransit trips provided annually (ADA trips) and total annual expenditures attributable to ADA complementary paratransit requirements (ADA expenses). We examined data for all transit agencies reporting these two data fields from 2005 through 2010, the most recent year of data available at the time of our review. We chose to assess data for 2005 through 2010 because we wanted to identify the extent to which we could report trends in data over this series of years. In addition, we chose to analyze data for these two fields because they are the only two fields related to ADA paratransit in the NTD. We found that the NTD does not contain a data field that asks transit agencies whether they are required to provide ADA paratransit services. To determine whether the NTD data on ADA trips and ADA expenses would be reliable for our purposes, we interviewed FTA officials who are knowledgeable about the design and uses of the NTD data. We also assessed the data’s accuracy and completeness by analyzing the extent to which transit agencies reported these two data fields for all 6 years of interest. In addition, we compared the NTD data to data from our generalizable survey of transit agencies. 
Our analysis found that about one-third of transit agencies reporting ADA paratransit data did not report these data in all 6 years of data we analyzed. We found that, when analyzing data from transit agencies that reported providing ADA trips in at least one year from 2005 to 2010, about 32 percent of the agencies did not provide data in one or more of the years of interest. Similarly, about 30 percent of transit agencies reporting ADA expenses in at least one year from 2005 to 2010 did not report data for all 6 years of interest (see table 4). Some of the transit agencies that did not report data for all 6 years skipped years of reporting—for instance, an agency might have reported in 2005, 2009, and 2010. Other transit agencies reported data for consecutive years, but not for all of the 6 years—for instance, they reported data in 2005, 2006, and 2007. Since the NTD does not contain a field regarding whether transit agencies are required to provide ADA paratransit services in a particular year, we could not assess whether those transit agencies reporting for fewer than 6 years were in error. In addition, we found that although larger transit agencies were less likely than smaller transit agencies to have missing data, the missing data from larger transit agencies—because they provide more ADA paratransit trips than smaller transit agencies—would probably have a greater impact on the overall data. We could not determine how many of the transit agencies that did not report data in all 6 years should have reported these data, and how many had legitimate reasons for not reporting in all years. FTA officials told us about cases in which transit agencies should report ADA paratransit data to NTD, but fail to do so. They also told us about cases in which valid reasons exist for transit agencies not to report data every year. 
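The completeness analysis described above amounts to checking, for each agency that reported in at least one year, whether it reported in all 6 years from 2005 through 2010. A minimal sketch, using made-up reporting records that mirror the patterns described (complete, skipped years, stopped early, started late):

```python
# Sketch of the 6-year completeness check (made-up reporting records).
YEARS = set(range(2005, 2011))  # 2005 through 2010

# agency id -> years in which it reported ADA trips to the NTD
reported = {
    "A": {2005, 2006, 2007, 2008, 2009, 2010},  # complete
    "B": {2005, 2009, 2010},                     # skipped years
    "C": {2005, 2006, 2007},                     # stopped reporting
    "D": {2008, 2009, 2010},                     # started late
}

# Only agencies that reported in at least one year are in scope.
in_scope = {a: y for a, y in reported.items() if y}
incomplete = [a for a, y in in_scope.items() if y != YEARS]
share_missing = len(incomplete) / len(in_scope)
print(f"{share_missing:.0%} of reporting agencies missed one or more years")
# With these invented records, 3 of 4 agencies have gaps; GAO found about 32%.
```

As the report notes, the check cannot by itself distinguish agencies that failed to report from agencies with valid reasons (waivers, service changes) for a gap.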
Transit agencies may receive reporting waivers (for example, because of hurricanes or other natural disasters) that exempt them from reporting any data to NTD. Transit agencies may also introduce or discontinue ADA paratransit services for various reasons, which can lead to the appearance of missing data. It is not possible to tell from the data, however, whether these missing data are because of valid reasons, such as reporting waivers or changes in service, or because of a transit agency’s failure to report. In addition, transit agencies may misunderstand the definition of ADA paratransit service and make reporting errors as a result—they may report ADA trips and ADA expenses erroneously one year because they think their specialized, demand-responsive service counts as ADA paratransit service, even though the service is not provided in order to comply with the ADA. When agencies correct the reporting error in subsequent years and do not report these data, it can appear that they have failed to report consistently. According to FTA officials, it is difficult to verify whether transit agencies that report ADA paratransit data are indeed reporting about ADA paratransit services, or whether they are reporting about generic demand-responsive services. Without a field identifying those transit agencies that provide ADA paratransit, we attempted to use another field—those transit agencies that reported providing demand-response service—as a proxy to help determine which transit agencies should and should not report ADA paratransit data. Demand response is a broad service category that includes ADA paratransit service. Our analysis found that in each year from 2005 to 2010, 22 percent to 26 percent of transit agencies that reported providing demand-response service did not report providing ADA trips or having ADA expenses (see table 5).
Based on results from our survey of transit agencies, only about 9 percent of transit agencies reported providing demand-response service but not ADA paratransit service—a lower percentage than the 22 to 26 percent that were found to report demand response service but not ADA trips or ADA expenses to the NTD. This suggests that some of the transit agencies reporting demand response service but not ADA trips or ADA expenses do indeed provide ADA paratransit services—and should have reported ADA trips and ADA expenses. We could not determine what effect the non-reporting transit agencies had on the ADA paratransit services data because we could not determine how many transit agencies should have reported, but did not do so; how many had valid reasons for not reporting; and how many may have over-reported based on misunderstanding the definition of ADA trips or ADA expenses. As a result, we determined that the ADA paratransit services data available in NTD were not sufficiently complete and therefore were not reliable for our purposes, which were to provide information on changes in ADA paratransit demand and costs since 2007. In addition to the individual named above, other key contributors to this report were Heather MacLeod, Assistant Director; Robert Alarapon; Dwayne Curry; Kathleen Gilhooly; Timothy Guinane; Delwen Jones; Katherine Killebrew; Luann Moy; Beverly Ross; Sonya Vartivarian; and Betsey Ward.
The ADA, a civil rights law enacted in 1990, provided that it shall be considered discrimination for a public entity that operates a fixed-route transit system to fail to offer paratransit service to disabled individuals that is comparable to services provided to those without disabilities. FTA is responsible for overseeing compliance with ADA requirements for paratransit services. As requested, GAO examined: (1) the extent of compliance with ADA paratransit requirements, (2) changes in ADA paratransit demand and costs since 2007, and (3) actions transit agencies are taking to help address changes in the demand for and costs of ADA paratransit service. GAO analyzed FTA's ADA compliance reports; conducted a generalizable web-based survey of 145 transit agencies; interviewed federal officials; and interviewed officials from 20 transit agencies, chosen based on a variety of characteristics, including geographic diversity. Little is known about the extent of transit agencies' compliance with the Americans with Disabilities Act (ADA) paratransit service requirements. FTA does receive some assurance that agencies are complying with federal statutes and regulations, including ADA paratransit requirements, because transit agencies that receive FTA funding are required to self-certify and assure that they are complying with the Department of Transportation's ADA regulations. Additionally, FTA conducts specialized ADA paratransit compliance reviews that examine multiple aspects of an agency's paratransit services; however, few transit agencies are selected for review each year. FTA generally relies on complaints, media reports, experience with an agency, and other information to select agencies for review, but does not have documented criteria for selecting agencies. This informal selection process does not align with federal guidance on internal controls related to communication, documentation, and monitoring. 
Lastly, according to FTA officials, all finalized ADA paratransit compliance review reports are to be available on FTA's website, but GAO identified nine final review reports--conducted from 2004 to 2010--that have not been posted to FTA's website. Based on GAO's survey, the demand for ADA paratransit trips has increased since 2007 for some transit agencies, and costs for providing the trips remain high. The average number of annual ADA paratransit trips provided by a transit agency increased 7 percent from 2007 to 2010, from 172,481 trips in 2007 to 184,856 trips in 2010. Increases in demand for ADA paratransit services were driven by the 10 largest transit agencies, measured according to the population size of their service areas. Also, ADA paratransit trips are much more costly to provide than fixed-route trips. For example, the average cost of providing an ADA paratransit trip in 2010 was $29.30, an estimated three and a half times more expensive than the average cost of $8.15 to provide a fixed-route trip. The average cost of providing an ADA paratransit trip increased 10 percent from 2007 to 2010. GAO's analysis of ADA paratransit data available in FTA's National Transit Database (NTD) found that, according to GAO standards for data reliability, the data are not sufficiently reliable for the purpose of assessing changes in ADA paratransit demand and costs. For example, GAO found discrepancies, such as incomplete data, that may understate or overstate the number of ADA trips and amount of ADA expenses. According to FTA officials, some transit agencies fail to report these data, while others misunderstand the data fields and make reporting errors as a result. Transit agencies are taking actions such as coordinating with other transportation providers, offering travel training, and improving accessibility to address changes in ADA paratransit demand and costs.
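The summary percentages above follow directly from the reported averages, as a quick check shows (all figures restated from the report):

```python
# Check of the summary arithmetic (figures restated from the report).
trips_2007, trips_2010 = 172_481, 184_856
trip_growth = (trips_2010 - trips_2007) / trips_2007
print(f"Trip growth 2007-2010: {trip_growth:.0%}")  # 7%

para_cost, fixed_cost = 29.30, 8.15
ratio = para_cost / fixed_cost
# ratio is about 3.6, consistent with "an estimated three and a half times"
print(f"Cost ratio: {ratio:.1f}x")
```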
According to GAO's survey, about 59 percent of transit agencies are coordinating with health and human services providers to improve ADA paratransit services or address the costs of providing such services. About 44 percent of transit agencies are coordinating with other local transportation providers. Additionally, about 55 percent are using travel training to help paratransit riders transition to fixed-route services. Furthermore, GAO's survey results showed that over 62 percent of transit agencies have made accessibility improvements to their fixed-route systems since 2007. The Secretary of Transportation should direct the FTA Administrator to (1) document and make publicly available a formal approach for selecting transit agencies for ADA paratransit compliance reviews, (2) post the backlog of ADA compliance-review final reports and establish a process for the timely posting of future reports, and (3) provide guidance to transit agencies on how to accurately complete existing ADA paratransit data fields in the NTD.
As a result of the District’s financial crisis in 1994, the Congress passed the District of Columbia Financial Responsibility and Management Assistance Act of 1995 (the 1995 Act). The Congress established the Authority to perform the following functions, among others: eliminate budget deficits and cash shortages of the District through visionary financial planning, sound budgeting, accurate revenue forecasts, and careful spending; ensure the most efficient and effective delivery of services, including public safety services, by the District during a period of fiscal emergency; and conduct necessary investigations and studies to determine the fiscal status and operational efficiency of the District. In assuming these responsibilities, the Authority was to ensure that funds were available to meet the District’s obligations to vendors and taxpayers in a timely manner. To accomplish this, the Authority was required by law to establish several escrow accounts separate from the District’s General Fund so that monies could be separately maintained to fund District activities, including water and sewer service, public schools, and the University of the District of Columbia. The Authority was established as an entity within the District of Columbia government, with five board members appointed by the President of the United States. The Authority receives a regular, annual appropriation from the general fund of the District of Columbia in fixed amounts. Other appropriations, authorizing the use of (1) gifts, bequests, and other contributions and (2) interest earned on escrow accounts maintained by the Authority, are available for an indefinite period. The District of Columbia Management Reform Act of 1997 (Management Reform Act) expanded the Authority’s responsibilities to include the development and implementation of management reform plans.
The plans cover the major entities of the District and all departments of the District in the city-wide functions of Asset Management, Information Resource Management, Personnel, and Procurement. The Management Reform Act required that the Authority enter into contracts with consultants to develop plans for the major entities and four city-wide functions and establish management reform teams to implement each plan. These new responsibilities increased the amount of funds being spent by the Authority on behalf of the District. That act also authorized the Authority to spend interest earned on the escrow accounts maintained by the Authority as it considers appropriate to promote the economic stability and management efficiency of the District government. Currently, for activities for which the Authority controls the funds provided on behalf of the District, the Authority pays District-related expenses in one of three ways. For contracts originated by the District, the Authority either reimburses the District’s General Fund after District agencies pay vendors, or the Authority pays third parties directly based on District agencies’ submission of a payment request and approved invoice. The third approach involves the Authority using its own contracting authority; for those contracts, it approves the services rendered and pays third parties directly for goods and services provided to District agencies.
Our objectives were to determine (1) whether the Authority’s financial information was in the same amounts and consistently presented in the fiscal year 1996 and 1997 audited financial statements of the Authority and the District’s CAFR, (2) why the District’s internal control weakness concerning the Authority was not also included in the audit report on the Authority, (3) the Authority’s use of the escrow accounts’ interest income, (4) the Authority’s purpose for the “Taxable Equipment Lease/Purchase Agreement,” and (5) whether suggestions made to the Authority’s management in our prior letter were implemented. To address these objectives, we reviewed the Authority’s audited financial statements and management letters and the District’s CAFR and Report on Internal Controls and Compliance for fiscal years 1997 and 1996. We also obtained detailed supporting schedules, related documentation, and explanations from Authority officials as we considered necessary. In addition, we obtained and reviewed the specific laws cited and legal interpretations made through discussions with Authority officials. To further support the information provided in the financial statements and management letters, we interviewed and received additional supporting documentation from the external independent auditors of the Authority and the District. We also interviewed the Authority’s Executive Director, Chief Financial Officer, and General Counsel. We conducted our work from April 1998 through August 1998 in accordance with generally accepted government auditing standards. We requested comments from the Authority’s Chairperson on a draft of this report. The Authority’s Executive Director provided us with written comments, which are discussed in the “Authority’s Comments and Our Evaluation” section and are reprinted in appendix II. 
The Authority’s financial information as reported in its financial statements is consistent with the Authority’s financial information presented in the District’s CAFR. The fiscal years 1996 and 1997 financial statements of the Authority were included in the District’s CAFR, as a component unit, as required by GASB, and both the Authority and the District presented the Authority’s financial activities in accordance with GASB. Because of the widely different annual revenue amounts of the Authority ($8.6 million in fiscal year 1997) and the District ($5.2 billion in fiscal year 1997), the Authority’s account balances, which represent less than 0.2 percent of the District’s revenue, are summarized in the District’s CAFR instead of being reported in detail, as in the Authority’s financial statements. For example, several account line items (Due from District-Management Reform, Other Receivables from the District, and Advances from the District) on the Authority’s financial statements were combined, identified by a different name, and rounded to the nearest $1,000 when incorporated into one account (Interfund Account) in the District’s CAFR. However, the total dollar amounts reported by both the Authority and the District were the same. In addition, because the Authority and the District are different reporting entities, there were appropriately some differences in their presentation and classification of accounts. For example, several account balances (Government Appropriation, Interest Transferred from Escrow Accounts, and Other Income) that the Authority presented as “revenue” were presented as Interfund Transfers-In, an “other financing source,” in the District’s CAFR. Further, the Sale of Fixed Asset amount was presented as “revenue” by the Authority and as an “other financing sources—proceed” in the District’s CAFR. 
In the District’s auditors’ Reports on Internal Controls and Compliance for fiscal years 1996 and 1997, an internal control weakness was identified concerning controls over financial reporting involving the Authority’s transactions that relate to the District. The material weakness related to a lack of communication between the District and the Authority when transactions involve funds that are held by the Authority on behalf of the District. No finding on this issue was reported by the Authority’s auditors, nor would such a finding be expected, since this internal control weakness does not affect the Authority’s financial operations. The District’s auditors reported that the District’s Office of Finance and Treasury did not have complete records of the District funds that are maintained by the Authority in escrow accounts and could not regularly reconcile its balances for those accounts with the Authority’s recorded balances. The auditors also cited the following specific reasons for this internal control weakness: (1) the Authority did not promptly notify the District of, or provide the necessary documentation for, the specific details of financial activity that it incurred on behalf of the District; (2) the Authority issued the management reform contracts without promptly notifying the District of the financial activity to allow for the prompt recording of the related transactions; and (3) the District and the Authority had not developed procedures to promptly notify each other of amounts anticipated or actually received by the Authority on behalf of the District. The District’s auditors recommended that the District and the Authority jointly develop procedures that would result in the Authority providing to the District the kind of monthly financial information needed for the District to perform a comprehensive reconciliation. 
They further stated that such information should include the monthly balances and the financial activity for each individual escrow account maintained by the Authority on behalf of the District. It was also recommended that the Authority and the District develop procedures that provide for dual notification of activities involving donations and contracts administered by the Authority for the District. The Authority’s auditors stated, and we agree, that this material weakness did not affect the Authority’s internal controls related to preparation of its financial statements. Authority officials added their view that the problems cited in the District auditors’ report resulted from an internal control weakness within the District agencies, and not within the Authority, for the following reasons: (1) the Authority did not originally notify the District of the management reform contracts and their cost because the Authority originally intended to pay for those studies from its available funds; however, documentation of the contracts and costs to date was provided to the District agencies once the Authority decided to have the agencies reimburse it for these costs; (2) the U.S. Department of the Treasury or the District’s Office of Treasury is responsible for notifying District agencies of cash receipts held on their behalf by the Authority from the issuance of general obligation bonds or receipt of the District’s annual appropriation; and (3) District agencies should be responsible for recording expenditures when they approve amounts for payment, before submitting them to the Authority for payment from escrow accounts. The reasons cited for the Authority’s disagreement with the District’s auditors’ findings are valid for transactions initiated and approved by the District. 
However, as described in the earlier “Background” section of this report, when transactions are initiated by the Authority, the supporting data would not necessarily be concurrently available to the District. As a result, implementing the District auditors’ recommendations that the Authority provide monthly information to the District and that the two entities provide dual notification of activities involving Authority-administered contracts and donations is practical and necessary. Effective implementation of these recommendations would improve the District’s controls over cash by enabling it to promptly report and reconcile all financial activity. Section 106(d) of the 1995 Act authorizes the Authority to expend any amounts derived from interest income on accounts held by the Authority on behalf of the District for such purposes as it considers appropriate to promote the economic stability and management efficiency of the District government. In fiscal years 1997 and 1996, the escrow accounts earned interest of $9.8 million and $5.5 million, respectively. The Authority used $5 million of the interest income during fiscal year 1997, and the escrow accounts contained $10.3 million in accumulated interest as of September 30, 1997 (see table 1). The Management Reform Act required the Authority to contract with consultants to perform preliminary studies and reviews of District agencies so that recommendations could be made on the nature of the reform required at each agency. Of the $2.1 million paid to such consultants in fiscal year 1997, the largest portion, about $1.3 million, was used for an ongoing contract to conduct a comprehensive study and make recommendations on the Metropolitan Police Department’s organization and operation. The remaining contractor payments of about $800,000 were for various operational reviews of the University of the District of Columbia, Public Schools, and other agencies. 
At September 30, 1997, the Authority held almost $1.8 million that had been transferred from interest earned on the Federal Payment Fund escrow account. The Authority initially intended to pay for management reform consulting expenses using the transferred amount. However, before the end of the fiscal year, the Authority decided to have the affected District agencies pay for the consulting expenses. Accordingly, it established an amount due back to the Federal Payment Fund escrow account and authorized the bank to return the almost $1.8 million to the escrow account. Authority officials stated that the amount was returned in October 1997. The Authority also paid $476,700 to cover the District’s share of Medicaid payments and used the remaining $734,471 to pay its actual operating expenses in excess of budgeted amounts. This included a $478,000 increase in personnel costs for fiscal year 1997 that resulted from (1) hiring additional employees, (2) giving pay raises totaling $120,000 to 24 employees, and (3) paying $24,500 for lump sum retroactive locality pay adjustments for fiscal years 1995 and 1996. On September 30, 1997, the Authority borrowed $300,000 from a bank, secured by (1) a lien on personal property (furniture and equipment) acquired by the Authority during fiscal years 1996 and 1997 and (2) a pledge of a $300,000 certificate of deposit (CD) purchased from the bank. Under the agreement, title to the property, with a book value of $271,770, and the interest earned on the CD vest in the bank should the Authority default on the repayment of the loan. In addition, in the event of default, the bank is given the right to the funds on deposit in the CD to satisfy the Authority’s obligation. The Authority can pay off the debt at any time without fines or penalties for early prepayment. The agreement calls for the Authority to make 12 quarterly repayments of $25,000, totaling $300,000, from January 1, 1998, to October 1, 2000. 
The Authority is also required to pay $15,487 in interest during the first 4 quarters of the agreement’s term to cover the first year’s interest. Interest expenses for years 2 and 3 of the agreement will be determined in accordance with its terms, which stipulate that interest on the debt accrues at a rate 50 basis points in excess of the interest rate earned on the CD pledged as security. The interest rate on the CD is subject to annual adjustments. Authority officials stated that the purpose of the agreement was to obtain needed financing by recovering the net cost of assets acquired with fiscal year 1997 and 1996 funds and spreading that cost over a 3-year period, and to free up budget capacity (budget authority). After considering the economic benefit of the transaction, the Authority’s cash on hand and other account balances as of September 30, 1997, and the transaction’s future impact, we concluded that there was no economic need for the Authority to enter into this transaction. Although the transaction resulted in an increase in the Authority’s fiscal year 1997 surplus, it had an overall negative economic impact by creating a net additional cost of $3,488 over the term of the agreement (interest payments of $27,863 versus interest earned on the pledged certificate of deposit of $24,375). 
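The net additional cost can be checked directly from the figures reported above. The following sketch uses only amounts stated in this report (the year 2 and 3 interest depends on annual CD rate adjustments, so the $27,863 total is taken as reported rather than derived):

```python
# Sketch of the "Taxable Equipment Lease/Purchase Agreement" arithmetic,
# using only the dollar amounts stated in this report.

QUARTERLY_PAYMENT = 25_000   # 12 quarterly principal repayments
NUM_PAYMENTS = 12            # January 1, 1998, through October 1, 2000

total_principal = QUARTERLY_PAYMENT * NUM_PAYMENTS   # amount borrowed

total_interest_paid = 27_863   # total interest over the loan term (reported)
cd_interest_earned = 24_375    # interest on the pledged $300,000 CD (reported)

# Net additional cost of the transaction to the Authority.
net_cost = total_interest_paid - cd_interest_earned

print(total_principal)   # 300000
print(net_cost)          # 3488
```

As the sketch shows, the interest paid on the loan exceeds the interest earned on the pledged CD by $3,488, which is the net additional cost cited above.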
In addition, when the Authority entered into this agreement, it (1) pledged $300,000 of existing cash to the bank (which placed the cash in a restricted account) in order to receive the same amount of funds, resulting in no increase in available cash; (2) had an accumulated surplus of $444,982 at the beginning of fiscal year 1997; (3) already had a $253,000 surplus for fiscal year 1997 from general operations and sufficient cash on hand to meet its current liabilities; (4) had access to $10.3 million of escrow account interest as of September 30, 1997, which was available for District operations; and (5) created the need to repay $303,488 over the term of the loan using future appropriations or escrow account interest. Repaying this debt would (1) save the Authority more than $1,100 in net interest to be paid over the next 2 years, (2) remove restrictions on the outstanding amount of $200,000 currently being held in a certificate of deposit, and (3) eliminate the need for further administration of the agreement. Our May 23, 1997, letter identified seven opportunities to improve the Authority’s future financial statements. The Authority implemented all of our prior report’s suggestions except the inclusion of an MD&A section as part of its audited financial statements (see appendix I). In our 1997 letter, we suggested to the Authority that, although it is not a current reporting requirement for state and local government entities, including an MD&A section could enhance the Authority’s financial statements. Authority officials stated that they provide a separate annual report on their progress and accomplishments to the Congress, as required under Section 224 of the 1995 Act, and that audited financial statements under GASB are not required to address the Authority’s performance and accomplishments. They suggested that including the same information in its financial statements is unnecessary. 
Federal agencies that prepare financial statements under the Chief Financial Officers Act of 1990 (the CFO Act) and publicly held private sector corporations regulated by the Securities and Exchange Commission (SEC) include as part of their financial statements an overview of the reporting entity, which is similar to an MD&A section. In addition, the Federal Accounting Standards Advisory Board (FASAB) and GASB have issued exposure drafts that will expand the use of the MD&A. An MD&A section presents information based on the results of an analytical review of relevant financial and performance data of the programs, activities, and funds that make up the reporting entity. An MD&A section would enhance the Authority’s financial statements because it is an important vehicle for (1) communicating managers’ insights about the reporting entity, (2) increasing the understandability and usefulness of the financial statements, and (3) providing understandable and accessible information about the entity and its operations, successes, challenges, and future. The Authority’s financial activity for fiscal years 1997 and 1996, as of September 30, 1997, was reported consistently and presented properly in both the Authority’s financial statements and the District’s CAFR. We agree with the District’s auditors that if the District received from the Authority more prompt and detailed information regarding monthly balances and financial activity, improved controls over cash and improved communication between the two entities would result. We continue to believe that our prior suggestion that the Authority include an MD&A section in its audited financial statements is needed and would enhance its financial statements. The Authority has made the required payments on the “Taxable Equipment Lease/Purchase Agreement” through September 30, 1998. At this time, the transaction has 2 years to run, and we see no economic benefit for the Authority in continuing with it. 
The Authority has the ability to pay off the loan by using some of the $10.3 million in interest income from escrow accounts. In commenting on a draft of this report, the Authority disagreed with sections of our report concerning the lack of communication between the Authority and the District when transactions involve funds that are held by the Authority on behalf of the District, the Authority’s rights and economic benefits resulting from the agreement called the “Taxable Equipment Lease/Purchase Agreement,” and our suggestion to enhance the Authority’s financial statements with an MD&A section. In addition, the Authority took exception to a previously issued GAO legal opinion, referred to in a footnote to this report, regarding the Authority’s compliance with pay rate limits provided in the 1995 Act. The Authority stated that it disagreed with the District auditors’ statement that the Authority did not notify the District in a timely manner of specific details regarding expenditures. The Authority’s basis for disagreement is that the District, not the Authority, incurs expenditures. The Authority, however, does incur expenditures, not only when it initiates payments for transactions incurred by the District but also for transactions it initiates on behalf of the District. As such, the District auditors noted that for these types of transactions the District did not have complete records of its funds maintained by the Authority in escrow accounts and could not regularly reconcile its balances. The Authority also stated that it is a temporary entity and that it is appropriate to hold the District’s Office of the CFO responsible for tracking and reconciling its revenues and expenditures, regardless of where funds may be held. 
Even though it is temporary in nature, until the Authority no longer exists, it has a fiduciary responsibility to provide the necessary documentation in a timely manner to the District CFO to ensure that the District’s records are adequately maintained, especially in those cases where it initiates payments on behalf of the District. The Authority took exception to our statement that the transaction provided for in the “Taxable Equipment Lease/Purchase Agreement” between the Authority and a bank is in substance a secured loan. We believe our description of this transaction is accurate for several reasons. First, while the Authority stated that our view of the transaction failed to recognize that the equipment “was sold back to the bank,” the Authority also stated that the bank “only has a lien against the equipment.” If the Authority sold the equipment to the bank, thereby making the bank the equipment’s owner, then the bank would not have needed a lien against equipment it owned when it leased the equipment to the Authority. Second, the Authority’s statement that the equipment was sold to the bank is inconsistent with the agreement. Section 10 of the agreement states that title to the equipment is deemed to be with the Authority unless the Authority defaults on its obligation under the agreement. Section 21 of the agreement provides that the bank’s security interest in the equipment ends, and the Authority’s title is free and clear of all encumbrances, when the Authority satisfies its obligations under the agreement. These and other provisions of the agreement establish that the transaction was a secured loan. While the Authority states that our view of the transaction is contrary to the legal position of both the Authority and the bank’s counsel, the Authority did not respond to our requests for the legal analysis of either its or the bank’s counsel. 
During our review, the Authority’s staff advised us that the Authority entered into the transaction, and used the proceeds, pursuant to section 103(g) of the 1995 Act, which authorizes the Executive Director to enter into such contracts as the Executive Director considers appropriate (subject to approval of the chair) to carry out the Authority’s responsibilities under the act. A general grant of authority to contract does not authorize an entity to borrow and spend the proceeds. Without explicit authority to borrow—and we are not aware of any such authority in this case—the Authority’s borrowing and use of the proceeds was an improper augmentation of its appropriation. In addition, the Authority stated that the transaction sets an example for the District government because it leveraged scarce operating revenues. However, the Authority had a $253,000 surplus for fiscal year 1997, a $444,982 accumulated surplus carried forward from prior years, and access to more than $10 million of interest earned on escrow accounts. Further, it was not leveraging resources, since it had to pledge $300,000 of existing cash, which was placed into a restricted account, in order to receive the same amount. The Authority also stated that our analysis unfairly focused upon the economic benefit of the transaction. As discussed in our report, the transaction did not generate additional cash and resulted in a net cost to the District with no apparent benefit, financial or nonfinancial. Thus, entering into such a transaction without a sound reason, economic or otherwise, is not a good example for the District government to emulate. The Authority stated that the inclusion of an MD&A section in the financial statements is unnecessary, time consuming, and redundant. However, OMB and the SEC have already recognized the usefulness of an MD&A section in the financial statements of federal entities and private sector companies, respectively. 
GASB also recognizes the importance of an MD&A section for state and local government entities, as demonstrated in its exposure draft on “Basic Financial Statements—and Management’s Discussion and Analysis—for State and Local Governments,” dated January 31, 1997. Currently, the Authority prepares another report with the same types of information that can be used in an MD&A section. Thus, utilizing information already available would not be time consuming and, as stated in our report, would enhance the understandability and usefulness of the Authority’s financial statements. Finally, the Authority took exception to our legal opinion (B-279095.2), issued on June 16, 1998, relating to its compliance with the limits on rates of basic pay for senior executives. At the time we prepared our opinion, we were aware of the Authority’s argument, which was included in attachments to its November 2, 1998, response commenting on a draft of this report, but we concluded that the language of section 102 of the 1995 Act does not permit the Authority’s staff to be paid at rates that exceed the pay limitation. In addition, the Congress specifically stipulated in the Authority’s fiscal year 1999 appropriation that funds provided to the Authority may not be used to pay “any compensation of the Executive Director or General Counsel of the Authority at a rate in excess of the maximum rate of compensation which may be paid to such individual during fiscal year 1999 under section 102 of [the 1995 Act] as determined by the Comptroller General (as described in GAO legal opinion B-279095.2).” We have evaluated the Authority’s technical suggestions and have incorporated them as appropriate. In addition, the Authority provided attachments to its response regarding its correspondence with congressional committees on the Authority’s compliance with the rates of basic pay. We have considered these attachments in our evaluation; however, they are not included in this report. 
We are sending copies of this report to the Ranking Minority Member of your Subcommittee and the Chairmen and Ranking Minority Members of the Subcommittee on the District of Columbia, Senate Committee on Appropriations; the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs; and the Subcommittee on the District of Columbia, House Committee on Government Reform and Oversight. We are also sending a copy to the Chairperson, District of Columbia Financial Responsibility and Management Assistance Authority. Copies will be made available to others upon request. Major contributors to this report are listed in appendix III. If you or your staff have any questions, please contact me at (202) 512-4476 or Hodge Herry, Assistant Director, at (202) 512-9469. The seven suggestions from our May 23, 1997, letter were as follows. 1. Include a Management Discussion and Analysis (MD&A) section to enhance the annual report. 2. Clearly label and describe, in the notes to the financial statements, (1) the Agency Funds’ separate statement, (2) what the information represents, and (3) how it relates to the Authority’s financial statements. 3. Define the actual, actual (budgetary basis), and budgeted reporting bases used in the FY 1996 Combined Statement of Revenues, Expenditures, and Changes in Fund Balance. 4. Delete the reference to Proprietary Funds in Note 2 since none were reported. 5. Revise Note 2 to refer to the Combined Statement of Revenues, Expenditures, and Changes in Fund Balance and to discuss that no encumbrances were reported for fiscal year 1996. 6. Include more detailed and useful information in Note 3 on the types of reimbursement due from the District. 7. Explain in Note 5 that fixed assets are reported on the Combined Balance Sheet at their net value and that depreciation is not reported on the Statement of Revenues, Expenditures, and Changes in Fund Balance, in accordance with governmental accounting standards. 
The following are GAO’s comments on the letter from the Executive Director of the District of Columbia Financial Responsibility and Management Assistance Authority dated November 2, 1998. 1. We revised the report as appropriate. 2. Our report did not state that the Authority used $734,471 to give pay raises and lump sum retroactive pay adjustments. Our report properly states that these payments were part of the Authority’s expenditures in excess of budgeted amounts. 3. Our report did not state that the use of interest earnings took place in fiscal year 1996. Our report properly states that the Authority was authorized to use interest income on all escrow accounts with the passage of the Management Reform Act and retroactively applied the interest earnings to its excess expenditures during fiscal year 1997. 4. The Authority stated that its role and functions have increased since its inception without an increase in the Authority’s appropriated budget. While it is true that the Authority’s responsibilities have increased, the Congress also provided the Authority with additional sources of financing that could be used for the increased responsibilities. In fiscal year 1997, the Congress, in the Management Reform Act, provided the Authority with access to the interest earned on all escrow accounts held on behalf of the District. 5. The Authority’s statements that neither the CD nor the interest is pledged to the bank are inconsistent with provisions of the “Taxable Equipment Lease/Purchase Agreement” and related documents. Section 10 of the agreement states that the Authority’s obligation under the agreement shall be secured by a Deposit Pledge Agreement under which the Authority will pledge to the bank a CD representing $300,000 on deposit with the bank. 
Section 2.1 of the Deposit Pledge Agreement provides that the Authority pledge a continuing lien and security interest in the (a) CD, (b) all money and funds on deposit pursuant to, or represented by, the CD, and (c) all rights for payment of the CD and all interest payable by reason of the CD. Finally, section 4.1 of the Deposit Pledge Agreement provides that the Authority’s failure to pay the amount owed to the bank entitles the bank to the CD, related cash, and unpaid interest to satisfy the Authority’s obligation to the bank. 6. The draft report provided to the Authority for formal comment on October 21, 1998, did not include any recommendations. 7. We revised the report to reflect the District’s current functional realignment. Richard Cambosos, Senior Attorney
Pursuant to a congressional request, GAO compared the audited financial statements and management letters of the District of Columbia Financial Responsibility and Management Assistance Authority for fiscal years (FY) 1996 and 1997 to the District's Comprehensive Annual Financial Report (CAFR) to determine: (1) whether there was agreement of amounts and consistency of presentation regarding the Authority's financial information; and (2) why the District's internal control weakness that relates to the Authority was not identified in the audit report on the Authority's financial statements. GAO also provided information on the: (1) Authority's use of interest income from escrow accounts established on behalf of the District; (2) Authority's purpose for the transaction entitled Taxable Equipment Lease/Purchase Agreement; and (3) status of the Authority's implementation of GAO's suggestions for its financial statements for fiscal years 1995 and 1996. GAO noted that: (1) the Authority's audited financial statements and the District's audited CAFR for fiscal years 1996 and 1997 revealed that the financial statements included the same amounts for Authority operations; (2) the presentation and categorization of the Authority's amounts were in accordance with the appropriate sections of the Governmental Accounting Standards Board accounting principles for both sets of financial statements; (3) in the District's auditors' report on internal controls and compliance for FY 1997, they identified a material weakness concerning financial reporting controls over transactions involving the Authority; (4) the District's auditors recommended that the Authority, along with the District, implement procedures to provide monthly balances and the related support for all financial activity each month on behalf of the District; (5) the Authority's auditors stated that this weakness did not affect the Authority's internal controls over financial reporting; (6) while Authority officials stated 
their belief that there was sufficient documentation available within the District to record financial activity on its books, the Authority's role in District operations and the District's dependence on the Authority for data on certain transactions and balances would necessitate effective communication of financial activity between the two entities; (7) since the Authority established the escrow accounts on behalf of the District, the accounts have earned interest income of $9.8 million and $5.5 million for fiscal years 1997 and 1996; (8) during FY 1997, $5 million was paid directly to vendors, transferred from an escrow account, or used to finance the Authority's operations; (9) the Authority entered into an agreement, entitled Taxable Equipment Lease/Purchase Agreement; (10) Authority officials stated that the purpose of the agreement was to obtain needed financing and to free-up budget capacity; (11) GAO identified seven opportunities for improving the Authority's future financial statements; (12) the Authority has implemented six of GAO's seven suggestions; (13) the one exception was the inclusion of a Management Discussion and Analysis (MD&A) section as part of its audited financial statements; and (14) with the concept of MD&A expanding across all governmental entities and presently a requirement in the federal government and for publicly-traded private sector corporations, GAO believes that including a MD&A section in the Authority's audited financial statements is needed and would enhance its financial statements.
EPA is required by the Clean Air Act to conduct reviews of the National Ambient Air Quality Standards (NAAQS) for the six criteria pollutants, including particulate matter, every 5 years to determine whether the current standards are sufficient to protect public health, with an adequate margin of safety. If EPA decides to revise the NAAQS, the agency proposes changes to the standards and estimates the costs and benefits expected from the revisions in an assessment called a regulatory impact analysis. In January 2006, EPA prepared a regulatory impact analysis for one such rule—particulate matter—that presented limited estimates of the costs and benefits expected to result from the proposed particulate matter rule. EPA developed the estimates by, for example, quantifying the changes in the number of deaths and illnesses in five urban areas that are likely to result from the proposed rule. The National Academies’ 2002 report examined how EPA estimates the health benefits of its proposed air regulations and emphasized the need for EPA to account for uncertainties and maintain transparency in the course of conducting benefit analyses. Identifying and accounting for uncertainties in these analyses can help decision makers evaluate the likelihood that certain regulatory decisions will achieve the estimated benefits. Transparency is important because it enables the public and relevant decision makers to see clearly how EPA arrived at its estimates and conclusions. Many of the recommendations include qualifying language indicating that it is reasonable to expect that they can be applied in stages, over time; moreover, a number of the recommendations are interrelated and, in some cases, overlapping. Soon after the National Academies issued its report, EPA roughly approximated the time and resource requirements to respond to the recommendations, identifying those the agency could address within 2 or 3 years and those that would take longer. 
According to EPA officials, the agency focused primarily on the numerous recommendations related to analyzing uncertainty. As is discussed below, EPA applied some of these recommendations to the particulate matter analysis. EPA applied—either wholly or in part—approximately two-thirds of the Academies’ recommendations in preparing its January 2006 particulate matter regulatory impact analysis and continues to address the recommendations through ongoing research and development. According to EPA, the agency intends to address some of the remaining recommendations in the final rule and has undertaken research and development to address others. The January 2006 regulatory impact analysis on particulate matter represents a snapshot of an ongoing EPA effort to respond to the National Academies’ recommendations on developing estimates of health benefits for air pollution regulations. Specifically, the agency applied, at least in part, approximately two-thirds of the recommendations—8 were applied and 14 were partially applied—by taking steps toward conducting a more rigorous assessment of uncertainty by, for example, evaluating the different assumptions about the link between human exposure to particulate matter and health effects and discussing sources of uncertainty not included in the benefit estimates. According to EPA officials, the agency focused much of its time and resources on the recommendations related to uncertainty. In particular, one overarching recommendation suggests that EPA take steps toward conducting a formal, comprehensive uncertainty analysis—the systematic application of mathematical techniques, such as Monte Carlo simulation—and include the uncertainty analysis in the regulatory impact analysis to provide a “more realistic depiction of the overall uncertainty” in EPA’s estimates of the benefits. 
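The formal uncertainty analysis the Academies call for (the systematic application of techniques such as Monte Carlo simulation) can be illustrated with a minimal sketch. Every figure, the distribution, and the single uncertain input below are hypothetical assumptions for illustration, not EPA's actual data or model:

```python
import random

random.seed(0)

# All inputs below are hypothetical, not EPA's actual estimates.
BASELINE_DEATHS_AVOIDED = 2500      # central estimate of deaths avoided by a rule
CONCENTRATION_RESPONSE_SD = 0.30    # relative uncertainty in the exposure-death link
VALUE_PER_DEATH_AVOIDED = 6.0e6     # assumed monetized value per death avoided

def simulate_benefits(n_draws=10_000):
    """Monte Carlo propagation of one source of uncertainty (the
    concentration-response coefficient) into monetized benefits."""
    draws = []
    for _ in range(n_draws):
        # Sample a multiplier on the causal link; truncated at zero so the
        # toy model never produces a harmful effect of cleaner air.
        multiplier = max(0.0, random.gauss(1.0, CONCENTRATION_RESPONSE_SD))
        draws.append(BASELINE_DEATHS_AVOIDED * multiplier * VALUE_PER_DEATH_AVOIDED)
    return sorted(draws)

draws = simulate_benefits()
low, median, high = draws[250], draws[5_000], draws[9_750]  # ~95% interval
print(f"benefits: ${low/1e9:.1f}B to ${high/1e9:.1f}B (median ${median/1e9:.1f}B)")
```

Reporting the resulting range and percentiles, rather than a single point estimate, is the "more realistic depiction of the overall uncertainty" the recommendation describes; a full analysis would sample many uncertain inputs jointly rather than just one.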
Overall, the uncertainty recommendations call for EPA to determine (1) which sources of uncertainties have the greatest effect on benefit estimates and (2) the degree to which the uncertainties affect the estimates by specifying a range of estimates and the likelihood of attaining them. In response, EPA examined a key source of uncertainty—its assumption about the causal link between exposure to particulate matter and premature death—and presented a range of expected reductions in death rates. EPA based these ranges on expert opinion systematically gathered in a multiphased pilot project. The agency did not, however, incorporate these ranges into its benefit estimates as the National Academies had recommended. Moreover, the Academies recommended that EPA’s benefit analysis reflect how the benefit estimates would vary in light of multiple uncertainties. In addition to the uncertainty underlying the causal link between exposure and premature death, other key uncertainties can influence the estimates. For example, there is uncertainty about the effects of the age and health status of people exposed to particulate matter, the varying composition of particulate matter, and the measurements of actual exposure to particulate matter. EPA’s health benefit analysis, however, does not account for these key uncertainties by specifying a range of estimates and the likelihood of attaining them. For these reasons, EPA’s responses reflect a partial application of the Academies’ recommendation. In addition, the Academies recommended that EPA both continue to conduct sensitivity analyses on sources of uncertainty and expand these analyses. In the particulate matter regulatory impact analysis, EPA included a new sensitivity analysis regarding assumptions about thresholds, or levels below which those exposed to particulate matter are not at risk of experiencing harmful effects. EPA has assumed no threshold level exists—that is, any exposure poses potential health risks. 
Some experts have suggested that different thresholds may exist, and the National Academies recommended that EPA determine how changing its assumption—that no threshold exists—would influence the estimates. The sensitivity analysis EPA provided in the regulatory impact analysis examined how its estimates of expected health benefits would change assuming varying thresholds. In response to another recommendation by the National Academies, EPA identified some of the sources of uncertainty that are not reflected in its benefit estimates. For example, EPA’s regulatory impact analysis disclosed that its benefit estimates do not reflect the uncertainty associated with future year projections of particulate matter emissions. EPA presented a qualitative description about emissions uncertainty, elaborating on technical reasons—such as the limited information about the effectiveness of particulate matter control programs—why the analysis likely underestimates future emissions levels. EPA did not apply the remaining 12 recommendations to the analysis for various reasons. Agency officials viewed most of these recommendations as relevant to its health benefit analyses and, citing the need for additional research and development, emphasized the agency’s commitment to continue to respond to the recommendations. EPA has undertaken research and development to respond to some of these recommendations but, according to agency officials, did not apply them to the analysis because the agency had not made sufficient progress. For example, EPA is in the process of responding to a recommendation involving the relative toxicity of components of particulate matter, an emerging area of research that has the potential to influence EPA’s regulatory decisions in the future. Hypothetically, the agency could refine national air quality standards to address the potentially varying health consequences associated with different components of particulate matter. 
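The threshold sensitivity analysis described above amounts to recomputing benefits under alternative threshold assumptions. A minimal sketch follows; the exposure distribution, risk coefficient, and amount of reduction are invented for illustration, not EPA's data:

```python
# Hypothetical exposure distribution: people exposed at each annual
# particulate matter concentration (micrograms per cubic meter).
population_by_level = {8: 4_000_000, 10: 3_000_000, 12: 2_000_000, 15: 1_000_000}
RISK_PER_UNIT = 1e-5   # assumed deaths per person per unit of exposure
REDUCTION = 2          # units of exposure removed by the hypothetical rule

def deaths_avoided(threshold):
    """Count benefits only for exposure above an assumed threshold;
    threshold=0 reproduces the no-threshold assumption."""
    total = 0.0
    for level, people in population_by_level.items():
        before = max(0, level - threshold)
        after = max(0, level - REDUCTION - threshold)
        total += people * (before - after) * RISK_PER_UNIT
    return total

for threshold in (0, 5, 10, 12):
    print(f"threshold {threshold:>2}: {deaths_avoided(threshold):.0f} deaths avoided")
```

Under the no-threshold assumption every exposed person contributes to the estimate; as the assumed threshold rises, people whose exposure falls below it drop out and the estimated benefit shrinks, which is exactly the pattern a sensitivity table makes visible.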
The National Academies recommended that EPA strengthen its benefit analyses by evaluating a range of alternative assumptions regarding relative toxicity and incorporate these assumptions into sensitivity or uncertainty analyses as more data become available. EPA did not believe the state of scientific knowledge on relative toxicity was sufficiently developed at the time it prepared the draft regulatory impact analysis to include this kind of analysis. In a separate report issued in 2004, the National Academies noted that technical challenges have impeded research progress on relative toxicity but nonetheless identified this issue as a priority research topic. The Clean Air Scientific Advisory Committee also noted the need for more research and concluded in 2005 that not enough data are available to base the particulate matter standards on composition. The Office of Management and Budget, however, encouraged EPA in 2006 to conduct a sensitivity analysis on relative toxicity and referred the agency to a sensitivity analysis on relative toxicity funded by the European Commission. We found that EPA is sponsoring research on the relative toxicity of particulate matter components. For example, EPA is supporting long-term research on this issue through its intramural research program and is also funding research through its five Particulate Matter Research Centers and the Health Effects Institute. In addition, an EPA contractor has begun to investigate methods for conducting a formal analysis that would consider sources of uncertainty, including relative toxicity. To date, the contractor has created a model to assess whether and how much these sources of uncertainty may affect benefit estimates in one urban area. Agency officials told us, however, that this work was not sufficiently developed to include in the final particulate matter analysis, which it says will present benefits on a national scale. 
Another recommendation that EPA did not apply to the particulate matter analysis focused on assessing the uncertainty of particulate matter emissions. The National Academies recommended that EPA conduct a formal analysis to characterize the uncertainty of its emissions estimates, which serve as the basis for its benefit estimates. While the agency is investigating ways to assess or characterize this uncertainty, EPA did not conduct a formal uncertainty analysis for particulate matter emissions for the draft regulatory impact analysis because of data limitations. These limitations stem largely from the source of emissions data, the National Emissions Inventory—an amalgamation of data from a variety of entities, including state and local air agencies, tribes, and industry. According to EPA, these entities use different methods to collect data, which have different implications for how to characterize the uncertainty. EPA officials stated that the agency needs much more time to address this data limitation and to resolve other technical challenges of such an analysis. While the final particulate matter analysis will not include a formal assessment of uncertainty about emissions levels, EPA officials noted that the final analysis will demonstrate steps toward this recommendation by presenting emissions data according to the level emitted by the different kinds of sources, such as utilities, cars, and trucks. Finally, EPA did not apply a recommendation concerning the transparency of its benefit estimation process to the particulate matter analysis. Specifically, the National Academies recommended that EPA clearly summarize the key elements of the benefit analysis in an executive summary that includes a table that lists and briefly describes the regulatory options for which EPA estimated the benefits, the assumptions that had a substantial impact on the benefit estimates, and the health benefits evaluated. 
EPA did not, however, present a summary table as called for by the recommendation or summarize the benefits in the executive summary. EPA stated in the regulatory impact analysis that the agency decided not to present the benefit estimates in the executive summary because they were too uncertain. Agency officials told us that the agency could not resolve some significant data limitations before issuing the draft regulatory impact analysis in January 2006 but that EPA has resolved some of these data challenges. For example, EPA officials said they have obtained more robust data on anticipated strategies for reducing emissions, which will affect the estimates of benefits. The officials also said that EPA intends to include in the executive summary of the regulatory impact analysis supporting the final rule a summary table that describes key analytical information. While EPA officials said that the final regulatory impact analysis on particulate matter will reflect further responsiveness to the Academies’ recommendations, continued commitment and dedication of resources will be needed if EPA is to fully implement the improvements recommended by the National Academies. In particular, the agency will need to ensure that it allocates resources to needed research on emerging issues, such as the relative toxicity of particulate matter components, and to assessing which sources of uncertainty have the greatest influence on benefit estimates. The uncertainty of the agency’s estimates of health benefits in the draft regulatory impact analysis for particulate matter underscores the importance of uncertainty analysis that can enable decision makers and the public to better evaluate the basis for EPA’s air regulations. 
While EPA officials said they expect to reduce the uncertainties associated with the health benefit estimates in the final particulate matter analysis, a robust uncertainty analysis of the remaining uncertainties will nonetheless be important for decision makers and the public to understand the likelihood of attaining the estimated health benefits. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have. For further information about this testimony, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Christine Fishkin, Assistant Director; Kate Cardamone; Nancy Crothers; Cindy Gilbert; Tim Guinane; Karen Keegan; Jessica Lemke; and Meaghan K. Marshall. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Scientific evidence links exposure to particulate matter--a widespread form of air pollution--to serious health problems, including asthma and premature death. Under the Clean Air Act, the Environmental Protection Agency (EPA) periodically reviews the appropriate air quality level at which to set national standards to protect the public against the health effects of six pollutants, including particulate matter. EPA proposed revisions to the particulate matter standards in January 2006 and issued a regulatory impact analysis of the revisions' expected costs and benefits. The estimated benefits of air pollution regulations have been controversial in the past, and a 2002 National Academies report to EPA made recommendations aimed at improving the estimates for particulate matter and other air pollution regulations. This testimony is based on GAO's July 2006 report Particulate Matter: EPA Has Started to Address the National Academies' Recommendations on Estimating Health Benefits, but More Progress Is Needed (GAO-06-780). GAO determined whether and how EPA applied the National Academies' recommendations in its estimates of the health benefits expected from the January 2006 proposed revisions to the particulate matter standards. While the National Academies' report generally supported EPA's approach to estimating the health benefits of its proposed air pollution regulations, it included 34 recommendations for improvements. EPA has begun to change the way it conducts and presents its analyses of health benefits in response to the National Academies' recommendations. For its particulate matter health benefit analysis, EPA applied, at least in part, about two-thirds of the Academies' recommendations. Specifically, EPA applied 8 and partially applied 14. For example, in response to the Academies' recommendations, EPA evaluated how benefits might change given alternative assumptions and discussed sources of uncertainty not included in the benefit estimates. 
Although EPA applied an alternative technique for evaluating one key uncertainty--the causal link between exposure to particulate matter and premature death--the health benefit analysis did not assess how the benefit estimates would vary in light of other key uncertainties, as the Academies had recommended. Consequently, EPA's response represents a partial application of the recommendation. Agency officials said that ongoing research and development efforts will allow EPA to gradually make more progress in applying this and other recommendations to future analyses. EPA did not apply the remaining 12 recommendations to the analysis, such as the recommendation to evaluate the impact of using the assumption that the components of particulate matter are equally toxic. EPA officials viewed most of these 12 recommendations as relevant to the health benefit analyses but noted that the agency was not ready to apply specific recommendations because of, among other things, the need to overcome technical challenges stemming from limitations in the state of available science. For example, EPA did not believe that the state of scientific knowledge on the relative toxicity of particulate matter components was sufficiently developed to include it in the January 2006 regulatory impact analysis. The agency is sponsoring research on this issue. We note that continued commitment and dedication of resources will be needed if EPA is to fully implement the improvements recommended by the National Academies. In particular, the agency will need to ensure that it allocates resources to needed research on emerging issues, such as the relative toxicity of particulate matter components, and to assessing which sources of uncertainty have the greatest influence on benefit estimates. 
While EPA officials said they expect to reduce the uncertainties associated with the health benefit estimates in the final particulate matter analysis, a robust uncertainty analysis of the remaining uncertainties will nonetheless be important for decision makers and the public to understand the likelihood of attaining the estimated health benefits.
SSA’s DI program provides cash benefits to individuals with disabilities, and paid $144 billion to 10.8 million beneficiaries and their families in fiscal year 2015. Individuals are generally considered to have a disability if (1) they cannot perform work that they did before and cannot adjust to other work because of their medical condition(s); and (2) their disability has lasted or is expected to last at least 1 year, or is expected to result in death. Further, individuals must have worked and paid into the program for a minimum period of time to qualify for DI benefits. DI overpayments occur when beneficiaries are paid more than they should be for a given period of time. We previously found that more than half of all DI overpayments are paid to beneficiaries earning above program limits. Overpayments may also result if SSA does not cease benefit payments when notified by a beneficiary of a change in work status, when inaccurate information and administrative errors lead to incorrectly calculated benefits, or as the result of individuals knowingly misleading the agency or committing fraud. As of September 30, 2015, approximately 637,000 individuals owed about $6.3 billion to SSA in DI overpayment debt. SSA will seek repayment of most overpaid benefits after pursuing various procedural steps. Specifically, when SSA detects an overpayment, it requests a full immediate refund, unless the overpayment can be withheld from the beneficiary’s next monthly benefit. SSA also notifies the overpaid person that they may request reconsideration, a waiver, or both. A beneficiary requests reconsideration when he or she disputes that an overpayment occurred or the amount of the overpayment, and requests a waiver when asserting that he or she is neither responsible for the overpayment nor capable of repaying it. 
SSA may grant a waiver request if it finds that the beneficiary was not at fault for the overpayment and that recovering the overpayment would defeat the purpose of the program or be against equity and good conscience. A waiver permanently terminates collection of a debt. If SSA denies a reconsideration, a waiver, or both, the agency will request full repayment. SSA will attempt to withhold SSA benefits from the beneficiary to immediately recoup the full amount. If the individual is not receiving SSA benefits at the time or is unable to immediately pay the full amount owed, the agency generally requests a repayment plan. This may take the form of voluntary remittances or withholding from monthly SSA benefits. These withholdings may be taken from DI or other SSA benefits being received, such as Supplemental Security Income (SSI) benefits. Withholding from other SSA benefits is known as cross-program recovery. SSA policy is to obtain repayment within 36 months, but it may approve longer repayment periods after reviewing an individual’s income, expenses, and assets. SSA regulations require a minimum monthly DI withholding of $10, an amount that has not changed since 1960 according to SSA. SSA’s policy is to stop its collection activities and temporarily write off a debt if it meets at least one of these criteria: the debtor cannot or will not repay the debt, the debtor cannot be located after a diligent search, the cost of collection actions is likely to exceed the amount recovered, or the debt is at least two years delinquent. (SSA may refer to such debt write-offs as terminating or conditionally writing off debts.) Temporarily writing off debts conditionally removes them from SSA’s accounts receivable balance, although SSA will refer debts to Treasury for collection through external tools. Prior to referring debts to Treasury, SSA notifies debtors and informs them of their appeal rights.
SSA will re-establish these debts and its own collection efforts if it receives payment through these external collection tools or if the individual becomes re-entitled to Social Security benefits. External debt collection tools include tax refund offset, which withholds or reduces federal tax refunds to the individual; federal salary offset, which withholds or reduces wages and payments to federal employees; administrative offset, which withholds or reduces other federal payments (other than tax refunds or salary) to the individual; administrative wage garnishment, which garnishes wages and payments from private employers or state and local governments; and credit bureau referral, which reports delinquent debt to credit bureaus and may adversely affect an individual’s credit scores. Conditionally written-off debts remain subject to collections through any available tools until the debt is paid in full or the case is otherwise resolved. As of the end of fiscal year 2015, SSA had $1.5 billion in overpayments that were conditionally written off. The average amount of written-off debt was about $4,100 and more than 75 percent of these debts were written off over 5 years ago. About 30 percent of people in written-off status were under age 18 when a parent received benefits for them and most of these recipients were written off in their late teens or twenties. The amount of outstanding DI overpayments increased by 70 percent between fiscal years 2006 and 2015, with the amount of debt newly detected and reestablished exceeding the amount collected, waived, or conditionally written off in 9 of the last 10 years. Moreover, while collections of prior debt have seen some increases over the past 10 years, they have not kept pace with new debt established (see fig. 1). SSA can take several actions against individuals who knowingly mislead SSA or make false statements to obtain benefits, and these actions serve as deterrents against potential fraud and abuse.
Allegations of suspected wrongdoing are referred to SSA’s OIG by SSA staff or the public. OIG will assess each allegation received to determine whether it warrants investigation. According to SSA, those opened for investigation must be referred to the Department of Justice (DOJ) under the U.S. Attorney of jurisdiction any time OIG has grounds to believe there has been a criminal violation, as required by the Inspector General Act. Once DOJ reviews a case for potential civil or criminal action, OIG decides whether to impose civil monetary penalties (penalties). Section 1129 of the Social Security Act provides for penalties against individuals who make certain false statements, representations, or omissions in the context of determinations of initial or continuing eligibility. Under that section, there are certain factors that must be considered when determining the amount of a penalty, which are: the nature of the individual’s actions, the circumstances under which the actions occurred, the individual’s history of prior offenses, the individual’s degree of culpability in the current case, the financial condition of the individual, and any other factors that justice may require. OIG officials told us that they exercise discretion when deciding which cases to pursue for penalties and take into account the age of the individual and the availability of OIG resources, among other considerations. A penalty of up to $5,000 may be imposed for each false statement or material omission, and an additional assessment, up to double any payment that was made as a result, may be imposed. OIG’s Office of Counsel to the Inspector General (OCIG) imposes penalties, but subsequently refers penalties imposed to SSA’s Office of Operations for collection. According to SSA, because penalties result from fraud and misconduct, SSA cannot terminate collection or write off the debt without the permission of DOJ. Additionally, individuals cannot discharge penalties through bankruptcy.
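The statutory ceiling described above (up to $5,000 per false statement or omission, plus an assessment of up to double any resulting payment) is straightforward arithmetic; the example figures below are hypothetical:

```python
PENALTY_CAP_PER_STATEMENT = 5_000  # Section 1129 cap per false statement, dollars

def maximum_exposure(false_statements, resulting_payments):
    """Upper bound under Section 1129: a per-statement penalty plus an
    assessment of up to double the payments made as a result. Actual
    amounts depend on the statutory factors and OIG's discretion."""
    return false_statements * PENALTY_CAP_PER_STATEMENT + 2 * resulting_payments

# e.g., three false statements that produced $12,000 in benefits:
# $15,000 in penalties plus a $24,000 assessment
print(maximum_exposure(3, 12_000))
```

In practice the imposed amount can be anywhere up to this ceiling, since OIG weighs the statutory factors such as culpability and financial condition.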
If OCIG declines to impose a penalty, it will consider whether administrative sanctions (sanctions) might be appropriate. If it determines that sanctions may be suitable, OCIG will return the case to SSA for further consideration. SSA is ultimately responsible for deciding whether sanctions are imposed in each case. If it imposes sanctions, the sanctioned individual will not receive benefit payments that he or she would have been entitled to for the duration of the sanction period: 6 months for a first occurrence, 12 months for a second occurrence, and 24 months for any subsequent occurrences. In fiscal year 2015, SSA identified about $1.2 billion in new DI overpayment debt and recovered about $857 million, of which 78 percent was collected by withholding some or all of beneficiaries’ monthly benefits (see fig. 2). SSA officials told us benefit withholding is their most effective tool for recovering overpayments and that collecting overpayments from individuals who no longer receive benefits can be difficult as these individuals may lack tax refunds or other federal and state payments to offset. Nonetheless, while withholding accounts for the bulk of collections, individuals repaying in this way make up less than half of people who have DI overpayment debt. Specifically, those repaying through benefit withholding represent about 311,000 of 637,000 people with DI overpayment debt. Benefit withholding plans, in which SSA withholds a specified amount of an individual’s benefits each month, often reflect lengthy repayment periods. We estimated the length of time needed to complete repayment for overpayments being collected in this way at the end of fiscal year 2015 (see fig. 3) and found that over 50 percent of plans will take more than 3 years to complete. In addition, about 44,000, or 1 in 7, withholding plans are scheduled to be completed after the beneficiary’s 80th birthday.
Given the age at which these beneficiaries are scheduled to complete repaying their debts, it is possible that many individuals will die before completing repayment. Moreover, individuals with the longest repayment periods account for a disproportionately large share of outstanding overpayment debt. For example, about 10 percent of individuals with withholding plans are scheduled to take over 30 years to repay their debts, but account for nearly a quarter of the outstanding debt to be recouped through withholding. We also found that over a third of withholding amounts were less than $50 and over half were less than $100 (see fig. 4). The median amount of monthly benefits being withheld from beneficiaries’ DI benefits to repay prior overpayments was $57. In addition, many withholding amounts represented a small percentage of recipients’ monthly benefits. About two-thirds of withholding amounts were less than 10 percent of beneficiaries’ monthly benefit (see fig. 5). SSA withheld a median of less than 8 percent of individuals’ monthly benefits for repayment. We also found, when we looked at data from the end of fiscal year 2015, that individuals with lower benefits had a larger share of their monthly benefits withheld for an overpayment debt (see appendix II, figures 7 and 8). For example, the median withholding level for those in the lowest quartile of monthly benefits was 10 percent, while for those in the highest quartile of monthly benefit, the median was 6.2 percent. Appendix II provides additional information on overpayments and withholding amounts and rates. Despite SSA’s heavy reliance on withholding benefits to recover debt, we found gaps in SSA’s guidance, oversight, and verification of information related to establishing withholding plans. The importance of determining and collecting an appropriate amount of debt from individuals is laid out in federal standards. 
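The repayment-period and withholding-rate figures above rest on simple per-plan arithmetic: months to repay is the balance divided by the monthly withholding, and the withholding rate is the withholding divided by the monthly benefit. A sketch with invented beneficiaries (none of these are actual cases):

```python
# Hypothetical plans: (overpayment balance, monthly withholding, monthly benefit).
plans = [
    (3_200, 57, 900),      # near the median balance and withholding amount
    (20_000, 50, 1_200),   # small withholding against a large debt
    (10_000, 250, 1_500),  # faster repayment at a higher withholding rate
]

for balance, withheld, benefit in plans:
    years = balance / withheld / 12    # years to full repayment
    rate = 100 * withheld / benefit    # share of monthly benefit withheld
    print(f"${balance:,} at ${withheld}/month: {years:.1f} years, {rate:.1f}% of benefit")
```

The second plan shows how a modest withholding amount can stretch repayment past 30 years, the pattern behind the plans scheduled to finish after a beneficiary's 80th birthday.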
The Federal Claims Collection Standards indicate agencies need to aggressively collect all debts arising out of activities of that agency, and that the size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor’s ability to pay. When pursuing debt, it is important for SSA to balance its collection efforts against the burden that repayment places on an individual, and for SSA to have policies and procedures in place that ensure staff consistently make decisions on debt recovery that balance these opposing goals. However, as described below, SSA policy for determining reasonable beneficiary expenses is ambiguous, repayment plans are not subject to review or oversight, and beneficiaries’ self-reported financial information is not independently verified. In the absence of these elements, SSA cannot reasonably ensure that repayment amounts and time frames determined by its staff are appropriate and set in accordance with best practices and agency policy. SSA’s policies for how to consider beneficiaries’ expenses when determining benefit repayment amounts are ambiguous and leave much to the judgment of staff. Federal Internal Control Standards indicate that agencies’ policies and procedures should be clearly documented in administrative policies or operational manuals. According to SSA policy, staff are to obtain an SSA form 632 documenting financial information, including income and expenses, from a beneficiary to determine his or her ability to repay an overpayment when the beneficiary requests a repayment period exceeding 36 months. In these cases, SSA policy generally directs staff to withhold the amount by which an individual’s income exceeds expenses, or the rate permitted by income or assets if there are excess assets, and notes that this amount should generally not be less than $10 per month.
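SSA's stated rule for plans exceeding 36 months (withhold the amount by which income exceeds expenses, generally no less than $10 per month) can be sketched directly. The sketch ignores the excess-assets provision, and the example figures are hypothetical:

```python
MINIMUM_MONTHLY_WITHHOLDING = 10  # floor in SSA policy, unchanged since 1960

def monthly_withholding(income, expenses):
    """Sketch of SSA's stated rule for repayment periods over 36 months:
    withhold income minus expenses, but generally at least $10 per month.
    (The excess-assets provision is omitted for simplicity.)"""
    return max(income - expenses, MINIMUM_MONTHLY_WITHHOLDING)

print(monthly_withholding(1_400, 1_250))  # income exceeds expenses by $150
print(monthly_withholding(1_000, 1_050))  # expenses exceed income: the floor applies
```

As the report notes, what counts as an allowable expense is where the ambiguity lies: two staff members applying different expense standards to the same beneficiary would compute different withholding amounts from this same formula.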
However, a recent report prepared for SSA by an external auditor found that the agency has contradictory policies for determining what reasonable expenses are for beneficiaries. SSA’s policy states that a person’s particular circumstances and lifestyle determine whether expenses are ordinary and necessary, and that patterns of living are established over time and these patterns must be considered when evaluating the facts. At the same time, SSA policy also directs staff to not allow extraordinary and unnecessary expenses, regardless of the person’s standard of living. The report noted that these conflicting statements can lead to confusion when determining a feasible repayment rate. In contrast, the Internal Revenue Service provides detailed guidance on allowable living expenses when determining taxpayers’ ability to repay a delinquent tax liability. Its Collection Financial Standards include national guidelines for the cost of food, clothing, and other items and local standards for housing, utilities, and transportation costs. In the absence of clear guidance, SSA staff may struggle to determine what a beneficiary can reasonably afford to repay, which could lead to inconsistencies across different repayment plans. SSA lacks effective oversight to know whether these plans are being consistently or appropriately administered. Federal Internal Control Standards indicate that key duties and responsibilities need to be divided among different individuals to reduce the risk of errors, and this should include separating responsibilities for authorizing, processing, recording, and reviewing transactions. In 2011, we reported that SSA staff are not required to obtain supervisory review of repayment plans, and recommended that SSA require supervisory review of repayment plans that extend beyond 36 months—the point at which SSA staff are directed to evaluate an individual’s ability to repay based on income, assets, and expenses.
The agency disagreed with our recommendation and has not taken any action to implement it. In the course of our current review, SSA maintained that reviewing withholding plans would not increase recovery of overpayments, but the agency did not provide any analyses or studies to support its position. We continue to believe that supervisory review is an important part of ensuring that staff create appropriate repayment plans. In addition to lacking supervisory review, SSA also has not performed targeted reviews of repayment plans for adherence to policy, even though the agency systematically samples cases to review other aspects of DI overpayment decision making through its Continuous Quality Area Director Reviews, such as whether waiver decisions are properly documented. Without oversight provided by either supervisory or quality assurance reviews, the consistency and appropriateness of repayment amounts cannot be known.

While oversight of repayment plans is lacking, any efforts to provide oversight could be hampered by a lack of documentation. Federal Internal Control Standards state that all transactions and other significant events need to be clearly documented, and that the documentation should be readily available for examination. SSA policy directs staff to obtain information and supporting documentation of the beneficiary’s income, assets, and expenses. This information should be documented by the beneficiary on a form 632 worksheet. Although SSA policy directs staff to retain copies of all supporting documentation (including bills and bank statements) for individuals whose overpayment is $75,000 or more, SSA policy does not explicitly require that supporting documentation, including the form 632 worksheet, be retained for lesser overpayment amounts. Since the median overpayment balance was about $3,200 at the end of fiscal year 2015, an audit trail for conducting oversight may not exist for many repayment plans.
Our review of a small sample of overpayment case files (with overpayment amounts ranging from about $3,000 to about $165,000) raised questions about the sufficiency of documentation. Our nonrepresentative sample consisted of 16 cases in which overpayments were being repaid through benefit withholding over repayment periods exceeding 36 months. In 4 cases, the overpayment exceeded $75,000, meaning SSA policy directed retention of supporting documentation such as mortgage statements, bills, or pay stubs; however, only 2 had any documentation verifying income or expenses. Further, in 3 of 8 cases in which beneficiaries were directed to complete a form 632 (the worksheet used by the beneficiary to request a repayment period exceeding 36 months and to document relevant financial information), we found no evidence in the file that the form was completed. Ultimately, not requiring documentation to be retained for the record for all plans precludes the agency from reviewing the accuracy of repayment amounts in any future review.

SSA may be missing opportunities to verify self-reported financial information, and therefore individuals’ ability to repay overpayments. Federal Claims Collection Standards state that agencies should obtain financial statements from debtors who represent that they are unable to pay in one lump sum and independently verify such representations whenever possible. Further, GAO’s Framework for Managing Fraud Risks in Federal Programs states that managers should take steps to verify self-reported data to effectively prevent and detect instances of potential fraud. While SSA policy directs staff to collect evidence (such as bank statements or bills) to corroborate self-reported financial information from some beneficiaries, the agency may be able to more efficiently and effectively validate self-reported information by other means that it already leverages for other purposes.
For example, SSA is already using the Department of Health and Human Services’ National Directory of New Hires (NDNH) to determine an individual’s initial and continued eligibility for DI and SSI benefits. The value of this database was further demonstrated in March 2014, when SSA initiated the Quarterly Earnings Pilot to systematically identify and contact DI beneficiaries before their earnings cause them to accumulate large overpayments. According to SSA, the project identified 278 cases for contact using these data about 10 months earlier than it presumably would have identified them using old procedures and methods, uncovering about $3 million in overpayments. Nevertheless, SSA officials told us the agency has not studied the feasibility of using NDNH to verify income information from individuals seeking to establish withholding plans. Similarly, since 2011, SSA has used an automated process, Access to Financial Institutions (AFI), to verify Supplemental Security Income (SSI) applicants’ bank balances and detect undisclosed accounts. In November 2015, legislation was enacted that requires individuals to authorize SSA to access their financial information when SSA decides, under certain circumstances, whether to waive their overpayments. Although SSA uses the same form to collect self-reported information for overpayment waiver decisions and withholding plans, according to SSA officials, the agency has not yet determined whether this recent legislation allows it to use AFI for verifying withholding plans that extend beyond 36 months. Using information sources such as AFI and NDNH to verify financial information provided by beneficiaries can help SSA ensure that it is collecting neither more nor less than an individual can afford to pay.

SSA reports that it has taken or is taking several steps to improve the collection of delinquent DI overpayment debt.
These include the following:

Modernizing the External Collection Operation (ECO) system: The ECO system identifies beneficiaries with delinquent debt and refers them to Treasury for external collection, using tools such as wage garnishment and tax refund offset. Currently, due to a system limitation, if a debtor has multiple debts, all of the debts must meet the criteria for referral to Treasury. If one debt is not eligible for referral (for instance, if an individual is requesting that a debt be waived), none of the debts will be referred. According to SSA officials, as part of its Overpayment Redesign initiative, SSA plans to address this limitation by changing the way ECO stores debts so that debts can be selected at the individual level rather than at the aggregate beneficiary record level. This update should allow Treasury to use external collection tools against more debtors and potentially increase the amount of overpayments recovered through these tools.

State Reciprocal Program: Under the State Reciprocal Program (SRP), managed by Treasury as part of the Treasury Offset Program, the federal government enters into reciprocal agreements with states to collect debts by offsetting state payments due to debtors, such as state income tax refunds. This program provides SSA with an additional avenue to recover overpayments from delinquent debtors and may increase overall debt recovery. SSA published regulations in October 2011 and modified its systems to begin accepting offsets of state payments in 2013. According to SSA officials, SSA is dependent upon Treasury, which enters into the reciprocal agreements with states, to expand the SRP to additional states.

Address Verification Project: Implemented in February 2015, SSA’s Address Verification Project is expected to improve its ability to notify individuals with delinquent debt before referring them to external collection.
Prior to implementation, SSA relied on the addresses in its records when notifying debtors of their delinquent debt. If the United States Postal Service returned the notice, SSA would cease collection activity and use a contractor to obtain a current address to re-notify the debtor. SSA now obtains a current address from the contractor before mailing the notice.

SSA and GAO identified several additional options that could increase overpayment recoveries. Officials told us that one change they are considering is to make the minimum monthly withholding amount 10 percent of an individual’s monthly benefit instead of the current $10 minimum, but SSA is in the early stages of studying this option and does not yet have time frames for implementing such a change. The agency noted that this could help minimize the number of long-term repayment plans and would bring DI collections more in line with its SSI program. Beyond this, we identified two additional options based on past GAO work and conversations with SSA.

Adjusting monthly benefit withholding according to cost of living adjustments (COLA): In 1996, we recommended that SSA adjust its monthly withholding amounts so that they keep pace with any annual increases in benefits. This option would accelerate overpayment recoveries with only minimal effect on recipients’ monthly benefits.

Charging interest on debt: SSA officials told us that they have the authority to charge interest on delinquent overpayment debt and would like to do so, but that they have not due to resource constraints and competing priorities. With respect to debts that are in the process of being repaid, such as through benefit withholding, SSA has determined that it does not have the authority to charge interest.
As we discuss below, however, charging interest on debt that is being repaid could help protect the value of overpayments against the effects of inflation, especially over longer repayment periods. However, SSA lacks concrete plans and time frames for studying and implementing these options or any other collection tools beyond those already in place, and SSA officials told us the agency currently has more pressing priorities than expanding its DI debt recovery tools. Federal Claims Collection Standards state that federal agencies shall aggressively collect all debts arising out of activities of, or referred or transferred for collection services to, that agency. Further, collection activities shall be undertaken promptly, with follow-up action taken as necessary.

Our analysis shows that the options we examined hold promise for increasing SSA’s recovery of DI overpayments. We reviewed overpayments as of September 30, 2015, that were being repaid through benefit withholding, and determined how existing scheduled benefit withholding amounts would be affected by (1) making the minimum withholding amount 10 percent of monthly DI benefits, (2) adjusting withholding amounts according to annual COLAs, and (3) charging interest on debts being collected through withholding. We took outstanding debts and withholding levels and computed the repayment schedule under the status quo and each alternative option. By definition, repayment schedules do not account for future changes, such as individuals who gain or lose eligibility for benefits or whose ability to repay changes. Such changes mean that actual collections differ from scheduled collections. Options that increase withholding will speed recovery and reduce the effects of attrition, while charging interest will delay the completion of repayment and magnify the effects of attrition.
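The scheduled-repayment computation described above can be sketched in a few lines of code. This is a minimal model under hypothetical values (a $3,200 debt, roughly the median balance cited earlier, and an assumed $900 monthly benefit); the month-by-month treatment of interest and COLAs is our own simplification, not SSA’s actual methodology.

```python
def months_to_repay(debt, monthly_withholding, annual_interest=0.0, annual_cola=0.0):
    """Months to fully recover a debt through monthly benefit withholding.

    Simplified model: interest accrues monthly on the outstanding balance,
    and each annual COLA raises the benefit (and thus the withheld amount).
    """
    balance, withhold, months = debt, monthly_withholding, 0
    while balance > 0:
        if months and months % 12 == 0:
            withhold *= 1 + annual_cola  # COLA-adjusted withholding
        balance = balance * (1 + annual_interest / 12) - withhold
        months += 1
        if months > 12_000:  # guard: repayment would take over 1,000 years
            return None
    return months

# Hypothetical beneficiary: $3,200 overpayment, $900 monthly benefit
print(months_to_repay(3200, 10))          # current $10 minimum: 320 months (~27 years)
print(months_to_repay(3200, 0.10 * 900))  # 10 percent minimum: 36 months (3 years)
```

Consistent with the report’s analysis, raising the minimum withholding from $10 to 10 percent of the monthly benefit collapses a decades-long schedule into a few years, while adding interest (for example, `annual_interest=0.01`) lengthens the schedule under the $10 minimum.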
Nonetheless, these options, implemented alone or in combination, have the potential to significantly increase collections of overpayment debt. Of the options we examined, setting a minimum withholding amount equal to 10 percent of an individual’s monthly DI benefit has the greatest potential to increase scheduled collections and reduce the amount of time to fully recover overpayments. We estimate this option would increase scheduled collections by $276 million over 5 years and reduce the median scheduled time to fully recover all beneficiary overpayments from 3.4 years to 2.3 years. Further, the share of beneficiaries scheduled to take over 20 years to complete repayment would decrease from 17 percent to 4 percent. Figure 6 below provides additional information on the effect of this scenario on scheduled repayment times. The increase in collections under this scenario comes entirely from individuals currently having less than 10 percent of their benefits withheld, and as such, the changes within this portion of the population are more pronounced when examined separately. Among those beneficiaries, the median scheduled repayment time would decrease by more than half, from 5.9 to 2.5 years.

Increasing the minimum withholding rate to 10 percent of monthly benefits could also be implemented in a way that improves collections while sparing or minimizing the effect on beneficiaries receiving the lowest monthly benefits. We estimate that only about 5 percent of the increase in collections would come from the quartile of beneficiaries receiving the lowest monthly benefits, in part because they already have a disproportionately larger share of their benefits withheld, and in part because increasing the withholding rate recovers far fewer dollars from those receiving lower monthly benefits than from those with higher benefits.
We estimate that adjusting monthly withholding amounts according to COLAs or charging interest on overpayment debt would have a smaller effect than changing the minimum withholding rate to 10 percent of monthly benefits (see table 1), but could help protect the DI trust fund from the effects of inflation. For example, if SSA overpaid a dollar in 1985 and the beneficiary repaid that dollar 30 years later in 2015, the recovered dollar would have only 45 percent of the buying power of the 1985 dollar. Similarly, if SSA overpaid a dollar in 2010 and recovered it in 2015, the repaid dollar would have only about 92 percent of the buying power of the dollar SSA overpaid. Given that many withholding plans extend for decades, the effect of inflation can be significant. Charging interest on outstanding overpayment balances at the rate of inflation would counteract the effect of inflation and give repaid dollars the same buying power they had when erroneously paid years earlier. Other agencies already charge debtors interest. For instance, the Internal Revenue Service charges individuals with delinquent tax debt interest at a rate that is adjusted quarterly and is based on the federal short-term interest rate. Similarly, adjusting monthly withholding amounts according to COLA increases could help accelerate repayments and thus help negate the effect of inflation on amounts repaid to the DI Trust Fund.

Implementing any combination of the options we examined could result in even higher scheduled collections. For instance, setting a minimum withholding rate of 10 percent of monthly DI benefits and charging an interest rate of 1 percent would increase scheduled collections by $287 million over the next 5 years, while these options implemented individually would be scheduled to bring in $276 million and $7 million, respectively.
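The buying-power arithmetic behind these figures can be reproduced with a one-line formula; the average annual inflation rates below are assumptions chosen to roughly match the report’s examples, not actual CPI data.

```python
def buying_power(years, avg_annual_inflation):
    """Real value of one repaid dollar after compounding inflation."""
    return 1 / (1 + avg_annual_inflation) ** years

# Assumed average inflation rates (illustrative only, not CPI figures)
print(round(buying_power(30, 0.027), 2))  # 0.45: a 1985 dollar repaid in 2015
print(round(buying_power(5, 0.017), 2))   # 0.92: a 2010 dollar repaid in 2015
```

Charging interest at the rate of inflation is equivalent to holding this ratio at 1.0, so each repaid dollar retains the buying power it had when erroneously paid.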
SSA’s OIG has in recent years increased its use of penalties against individuals who knowingly mislead the agency, according to SSA’s Office of Counsel to the Inspector General (OCIG), which is responsible for imposing penalties. According to SSA officials, in fiscal year 2010, OCIG successfully resolved 89 cases and imposed penalties totaling approximately $3.9 million. That increased to 313 cases and more than $17.6 million in fiscal year 2015. OCIG officials attribute this increase to improvements to its evaluations process and greater management focus on the use of penalties as a deterrent.

Increased penalties notwithstanding, officials told us SSA lacks reliable data on the status of penalties, how much of penalty amounts have been collected, and how much is delinquent. While OCIG imposes penalties on individuals, SSA’s Office of Operations is ultimately responsible for collecting these amounts. SSA officials said they could not provide us with comprehensive data on the number and amount of penalties paid because of limitations in their computer systems, and added that they would need to review each individual case to determine its repayment status. Federal Internal Control Standards indicate that program managers need appropriate data to monitor the performance of their program and help ensure accountability. Without valid data on the disposition of penalties, SSA cannot determine whether penalties are being used effectively across the agency and whether individuals who mislead the agency are paying as appropriate.

SSA reports that it is planning a number of steps to better track imposed penalties, and ultimately the amounts collected, as part of a larger effort to improve its processing of overpayments and other debts.
According to its plans, SSA hopes to:

by fiscal year 2018, assign penalties a unique transaction code to be able to track them through the collection process; and

by fiscal year 2020, unbundle penalties from other debts owed by an individual in its ROAR database (which is used to track debts and collections) in order to allow remittances to be applied directly to penalties as opposed to an individual’s cumulative debt.

While such improvements could help address the limitations we identified, they are a number of years away. Further, SSA notes in its plans that they may be subject to delays related to resource constraints. Moreover, SSA is still in the process of analyzing and planning potential fixes. As such, it is uncertain whether SSA will meet its intended time frames or whether its currently planned efforts may change and ultimately address the shortcomings it identified.

SSA has met with limited success collecting on imposed penalties, and is not using some tools to better ensure that individuals who knowingly mislead the agency pay their penalties. Officials said SSA currently only collects penalties by either withholding DI or other SSA benefits, or relying on individuals to voluntarily remit penalty amounts. A recent OIG audit highlighted the difficulty that SSA has in collecting delinquent penalties. In a sample of 50 penalties imposed between calendar years 2010 and 2012 totaling $1.9 million, OIG found that about $1.7 million of that amount remained uncollected as of July 2014. The majority of that amount (approximately $920,000) was associated with individuals not receiving benefits and with whom SSA had no ongoing collection actions, the same category of individuals who could be targeted with external collection tools. While officials noted the agency determined it can refer penalties for collection through some external collection tools, such as wage garnishment, tax refund offsets, and administrative offsets, the agency has not used them.
According to officials, SSA drafted a regulation for implementing these options; however, the regulation is still undergoing internal review and SSA does not yet have time frames for implementing these options. Moreover, the agency determined it is prohibited by statute from referring delinquent penalties for collection through other tools, such as federal salary offset, credit bureau reporting, and assessing interest. Nevertheless, SSA has not explored pursuing legislative authorities to use these tools. By not collecting some delinquent penalties and not considering additional tools to do so, SSA may be undermining the deterrent value of penalties against potential fraud. GAO’s Framework for Managing Fraud Risks in Federal Programs indicates that a consistent response to fraud demonstrates that management takes this subject seriously, and that the likelihood that individuals who engage in fraud will be punished serves to deter others from engaging in fraudulent behavior.

SSA collaborated with OIG to change its sanctions procedures in 2013 in an effort to more consistently impose sanctions across the agency. Prior to this change, officials told us, SSA field offices had broad discretion to impose sanctions. SSA officials told us that some offices were more aggressive in pursuing sanctions and that an offense that could result in sanctions in one office might not do so in another office. The new procedures direct that potential sanctions cases first be evaluated for prosecution or civil action by DOJ and then by OIG for the imposition of civil monetary penalties. Ultimately, SSA is responsible for determining whether to impose sanctions based on the circumstances of the case, such as whether evidence exists to show that the individual knowingly misled the agency. The relevant SSA field office is responsible for developing the documentation to support the sanction, which is then reviewed by a sanctions coordinator in the SSA regional office.
Despite changes in decision-making for sanction cases, unreliable data and shortcomings in how SSA tracks sanctions prevent the agency from reasonably ensuring that sanctions are imposed as appropriate, and ultimately prevent SSA from assessing whether its recent procedural changes had their desired effect.

SSA cannot reasonably ensure sanctions are imposed as appropriate: SSA officials told us that they could not provide us with reliable data on the disposition of sanction cases. SSA currently tracks the disposition of sanctions in a database, which includes whether sanctions were imposed and the sanction period (i.e., the period of time during which beneficiaries will not receive benefits). However, this database requires SSA staff receiving information on sanctions to manually enter information about the sanction into the database, which lacks data checks or related oversight and may lead to errors and omissions. For instance, officials in three regional offices with oversight responsibilities told us that decisions on sanctions cases are generally communicated between OIG and the relevant field office. If these officials are inadvertently not included on the communications, they cannot ensure that the sanctions database is properly updated. One regional official said this has resulted in instances in which SSA headquarters wanted to know why sanctions were not imposed in particular cases, but this official did not have the information needed to respond. Furthermore, officials in two regions noted that SSA’s database does not generate alerts when field offices fail to take action on potential sanctions cases, thus making it incumbent on regional coordinators to manually track and follow up on the status of cases. One regional official noted that the lack of tracking resulted in several instances in which SSA was pursuing sanctions years after the alleged wrongdoing.
SSA cannot evaluate procedural changes: Beyond the disposition of specific sanctions, officials told us that they also lacked reliable data on the number of sanctions imposed and whether this number has changed since the current procedures were instituted in 2013. This is likely a result of the limitations in how sanctions data are captured, as described earlier. Moreover, SSA conducted an internal assessment to determine whether field offices followed correct procedures for implementing sanctions. Specifically, SSA selected a sample of cases that were originally referred to OIG and were subsequently returned by OCIG to field offices. SSA determined that sanctions were not imposed in the majority of cases in which sanctions were likely warranted, often because field offices did not take action on cases in a timely manner. The study did not determine why the agency failed to act on these cases in a timely manner. SSA officials speculated that it may be due to the difficulty of imposing harsh punishment on beneficiaries and because sanctions are labor intensive for SSA staff.

Federal Internal Control Standards indicate that managers need to compare actual performance to planned or expected results and analyze significant differences, and that operational data are needed to determine whether an agency is meeting its goals for accountability. Furthermore, as indicated in GAO’s Framework for Managing Fraud Risks in Federal Programs, a prompt and consistent response to fraud demonstrates that agency management takes reports seriously and serves to deter others from engaging in fraudulent behavior. As a result of SSA’s internal evaluation, the agency recognized the need to better track sanction cases, improve how it communicates decisions, and act on them in a timely manner. However, officials said the agency is in the early stages of determining how it will address these identified shortcomings, and ultimately ensure the deterrent value of sanctions.
More recently, OCIG officials told us that they plan additional changes in how OCIG refers cases back to SSA for possible sanctions. According to SSA, OCIG will share additional information with SSA that may be helpful in SSA’s sanctions determinations. Notwithstanding this change, complete and accurate data will still be needed to effectively manage and evaluate SSA’s sanctions program.

While overpayments account for a relatively small portion of all DI benefit payments, it is incumbent on SSA to collect these debts as a good steward of public funds. Improvements in collecting overpayment debt, however small, could help strengthen the solvency of the DI trust fund. In short, the collection of overpayment debts warrants more attention than SSA has demonstrated to date. Absent clear policies and oversight procedures for establishing and reviewing withholding plans, which are heavily relied on by SSA to recover the bulk of overpayments, SSA cannot be sure that beneficiaries are repaying debts in appropriate amounts within appropriate time frames. Further, SSA could be collecting too little or too much money each month from beneficiaries by not leveraging available tools to verify beneficiaries’ ability to pay. By not implementing additional debt collection tools that would speed up lengthy withholding plans or ensure that the value of collections is not diminished by inflation, SSA is missing opportunities to restore debt to the DI trust fund. Increasing the minimum monthly withholding amount would promote more equity in how SSA deals with overpayments across its programs, while improvements to procedures and tools for establishing repayment plans would better protect those beneficiaries who truly lack resources to pay. As part of its efforts to ensure the integrity of the DI trust fund, penalties and sanctions are key tools that the agency needs to use effectively.
By not using all available tools to collect penalties and by not consistently imposing and tracking sanctions, SSA weakens its stance that fraud is unacceptable, and its ability to deter other individuals from attempting to collect benefits for which they are ineligible.

To ensure effective and appropriate recovery of DI overpayments and administration of penalties and sanctions, we recommend the Acting Commissioner of the Social Security Administration take the following 8 actions:

Clarify its policy for assessing the reasonableness of expenses used in determining beneficiaries’ repayment amounts to help ensure that withholding plans are consistently established across the agency and accurately reflect individuals’ ability to pay.

Improve oversight of DI benefit withholding agreements to ensure that they are completed appropriately. This could include requiring supervisory review of repayment plans or sampling plans as part of a quality control process, and requiring that supporting documentation for all withholding plans be retained to enable the agency to perform such oversight.

Explore the feasibility of using additional methods to independently verify financial information provided by beneficiaries to ensure that complete and reliable information is used when determining repayment amounts. These additional tools could include those already being used by the agency for other purposes.

Adjust the minimum withholding rate to 10 percent of monthly DI benefits to allow quicker recovery of debt.

Consider adjusting monthly withholding amounts according to cost of living adjustments or charging interest on debts being collected by withholding benefits. Should SSA determine that it is necessary to do so, it could pursue legislative authority to use recovery tools that it is currently unable to use.

Pursue additional debt collection tools for collecting delinquent penalties.
This includes taking steps to implement tools within its existing authority and exploring the use of those not within its authority, seeking legislative authority if necessary.

Take steps to collect complete, accurate, and timely data on, and thereby improve its ability to track, both civil monetary penalties and their disposition and administrative sanctions and their disposition.

We provided a draft of this report to the Social Security Administration for comment. In its written comments, reproduced in appendix III, SSA agreed with 7 of our 8 recommendations and disagreed with 1. SSA also raised some broader concerns about the focus of our report. SSA stated that our report confuses two distinct issues: recovering overpayments and deterring fraud through civil monetary penalties and administrative sanctions. We agree that these issues are distinct; however, both are important parts of safeguarding the integrity of the DI program and ensuring that payments are made in the right amounts to the right individuals. SSA stated that overpayments are not necessarily the result of fraud. We agree and note in our report that overpayments occur for a number of reasons, including fraud. SSA also stated it believed it was misleading to include deterring fraud in the title of our report, noting that penalties and sanctions are not themselves findings of fraud and are based on, among other things, findings of false or misleading statements or knowing omissions by individuals. We acknowledge this distinction and made revisions to the title and the report in response to SSA’s comments. However, we continue to believe that the consistent use of these tools serves as a deterrent against those who would engage in fraud or abuse of the DI program.

SSA agreed with our recommendation to clarify its policies regarding the reasonableness of expenses when determining beneficiaries’ repayment amounts.
SSA noted that it has already taken actions to clarify its policies regarding overpayments and waivers, and informed us in its comments that it delivered video training to its employees in 2015 on these topics. SSA added that it will continue to assess efforts and make other improvements to ensure consistent and accurate application of policy. To the extent that SSA’s efforts also address unclear written policies, such actions could help meet the intent of the recommendation.

SSA agreed with our recommendation to improve the oversight of benefit withholding plans and said it will explore options to do so. However, it disagreed with requiring supervisory review of repayment plans. We present supervisory review as just one option for improving oversight, and there may be other approaches SSA could explore for improving oversight in this area. Nevertheless, we continue to believe that this option—recommended in prior GAO work—can be an effective option for ensuring that staff create appropriate repayment plans.

SSA agreed with our recommendation to explore the feasibility of using additional methods to independently verify financial information provided by beneficiaries when determining repayment amounts.

SSA agreed with our recommendation to adjust the minimum withholding rate to 10 percent of monthly DI benefits, and noted that the President’s fiscal year 2017 budget submission contains a legislative proposal to do so. We acknowledge that SSA recently included a paragraph in its budget submission discussing this proposal. SSA may need to work closely with Congress to ensure this change is realized.

SSA disagreed with our recommendation to consider adjusting monthly withholding amounts according to cost of living adjustments or charging interest on debts being collected through withholding benefits.
For debt subject to benefit withholding, which is not considered delinquent debt, SSA asserted that these measures would not have a significant effect on the amount of debt recovered, especially compared to the option of making the minimum withholding rate 10 percent of monthly benefits. For delinquent debt, SSA asserted that charging interest would require substantial changes to multiple systems that affect its overpayment business processes, and would require extensive training for its employees. While SSA stated it has studied the potential changes needed to charge interest on debt, without further consideration of, for example, the costs and benefits of charging interest or adjusting withholding amounts according to cost of living adjustments, SSA cannot know the extent to which these options would improve debt recovery efforts or help protect the value of debts against the effects of inflation, which can be substantial given that withholding plans can take decades to complete. SSA agreed with our recommendation to pursue additional tools to collect delinquent penalties, and stated that it has begun drafting regulations to use existing external debt collection tools, as noted in our report. However, as we state in our report, SSA lacks time frames for completing this action. SSA reported that it is also developing a legislative proposal to allow it to use other tools it cannot currently utilize, such as reporting these debts to credit bureaus and withholding federal salary payments. Such actions, if implemented as intended, could help meet the intent of the recommendation. SSA agreed with our recommendations to improve its ability to track penalties and sanctions, and noted that it is developing workload tracking tools for both, which it expects to implement in fiscal year 2016, and is in the planning stages of an overpayment redesign effort that it said should result in more complete, accurate, and timely data for penalties. 
Such actions, if implemented as intended, could help meet the intent of the recommendations. SSA also provided technical comments on our draft that we incorporated as appropriate. In particular, SSA noted that our draft report contained sensitive information on its sanctions process, which we agreed to exclude. We are sending copies of this report to the appropriate congressional committees, the Acting Commissioner of the Social Security Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In conducting our review of how the Social Security Administration (SSA) recovers Disability Insurance (DI) overpayments and oversees civil monetary penalties and administrative sanctions, our objectives were to examine (1) how and to what extent SSA is recovering DI overpayments, and (2) SSA's procedures for imposing penalties and sanctions, and how often they are used. We conducted this performance audit from November 2014 to April 2016, in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine how SSA recovers DI overpayments, we reviewed relevant federal laws and regulations, and SSA policies and procedures. 
Regarding the extent to which SSA recovers overpayments, we obtained available data from SSA on the amounts of overpayments detected, waived or written off, collected, and reestablished in fiscal years 2006 through 2015, as well as data on the cumulative DI overpayment debt balance at the start and end of each fiscal year in that period. We also obtained corresponding data on the amount of DI overpayment debt recovered through internal and external debt collection tools. To examine SSA efforts to improve its recovery of overpayments, we reviewed agency plans, and publicly available documents such as its annual performance plan, and past GAO and Office of Inspector General (OIG) reports. We also interviewed SSA headquarters and regional staff responsible for overseeing the collection of overpayments. To obtain additional insight on SSA's recovery of DI overpayments, we interviewed officials from an organization representing SSA field office managers (National Council of Social Security Management Associations) and an organization representing advocates for individuals with disabilities (National Disability Rights Network). To gain perspective on how SSA sets and documents overpayment repayment plans, we reviewed a non-representative sample of 16 overpayments being repaid through benefit withholding established in fiscal year 2015. We selected a mixture of cases in terms of (1) whether the original overpayment amount was over or under $75,000, the threshold at which SSA policy requires the retention of documentation supporting income and expenses; and (2) whether more or less than 10 percent of the beneficiaries' monthly DI benefits were withheld to repay the overpayment. We then randomly selected cases for review from each of the four subsets of cases that result from applying our two criteria. 
In reviewing these cases, we sought to determine how SSA verified beneficiaries' ability to repay overpayments and how it documented these decisions, including reviewing whether SSA retained supporting documentation in accordance with its policies. This sample is non-representative and our results are not generalizable to all benefit withholding plans. To examine the extent to which SSA is recovering DI overpayments and options for improving collections, we obtained data on DI overpayments as of September 30, 2015. The data we obtained came from SSA's Master Beneficiary Record (MBR) and Recovery of Overpayments, Accounting and Reporting (ROAR) systems. We limited our data request and analysis to those overpayments publicly reported by SSA. Using these data, we calculated the effect of potential enhancements in terms of how much more SSA would be scheduled to collect in fiscal years 2016 through 2020, and how much faster SSA would be scheduled to recover these overpayments in full. Our estimates are based on withholding amounts and overpayments as of the end of fiscal year 2015 and the assumption that everyone will continue to pay based on the current schedule. This implies there will be, for example, no future changes in eligibility for benefits, no deaths among people having benefits withheld, and no changes in withholding amounts. We also did not attempt to estimate future overpayments. As such, actual total collections would differ from scheduled total collections. The enhancements we discuss below are based on information obtained from SSA or through examining past GAO work. We did not conduct an exhaustive review of options for improving debt recovery and there may be others that we did not consider. In reporting scheduled repayments for all of our enhancement scenarios, we adjusted repayment amounts by four inflation rates: 0, 2.0, 2.7, and 3.4 percent. 
This gives the reader a sense of the extent to which each of the policy options counteracts the effects of inflation, either by inflation-adjusting repayments or simply by speeding up SSA's recovery of overpayments, thereby reducing its exposure to inflation. We included the 0 percent inflation scenario because it isolates the effects of factors other than inflation; the 2.0, 2.7, and 3.4 percent rates are the long-range inflation scenarios the Social Security trustees estimated in the 2015 Trustees Report. For each month, we computed the recipient's remaining balance assuming that the recipient repaid either the normal monthly repayment amount or the remaining balance, whichever was less. We computed total repayments in each scenario as the sum of monthly payments. We assumed that people make monthly payments until they have paid off their entire balance and then stop paying. If their balance was less than their usual monthly payment, we assumed they paid exactly the outstanding balance in their final month. We estimated scheduled repayments for the following scenarios: 1. Baseline collections (no change): We examined beneficiaries' outstanding overpayment balances as of September 30, 2015, as well as their current monthly repayment rates. We used that information to estimate, at current withholding rates, when beneficiaries are scheduled to complete repaying their overpayment debts, their age at scheduled repayment, as well as how much they are scheduled to repay over the next five fiscal years. 2. Setting the minimum withholding rate to 10 percent of monthly DI benefits: We computed the standing repayment amount as the greater of 10 percent of the recipient's post-COLA benefits in each month or the recipient's actual repayment amount in the ROAR system as of September 30, 2015. 3. 
Adjusting monthly withholding amounts by the cost of living adjustment (COLA): These scenarios increased the withholding amounts that SSA reported by 0, 2.0, 2.7, or 3.4 percent effective in January of each year. The 2.0, 2.7, and 3.4 percent estimated COLAs are based on SSA's long-range inflation estimates in the 2015 Trustees Report. This scenario adjusts both benefit and withholding amounts. 4. Charging interest: This scenario increases the remaining balance at the beginning of each year by 1 percent in the 0 percent COLA scenario, and by an interest rate equal to the rate of inflation in the other scenarios. We chose the 1 percent interest rate in the no-inflation scenario because it is the rate of interest the U.S. government is allowed to charge in calendar year 2016 on delinquent debts. 5. Combined scenarios: We report the results of a few policy options in combination. It is important to note that our combination of interest and COLA effectively undoes the effects of inflation on both monthly repayment amounts and on total debt owed. We assessed the reliability of the data we used by checking for extreme and implausible values and by comparing the totals in them to published sources, and found them to be sufficiently reliable for our use. In estimating scheduled repayments for the above scenarios, we made a number of decisions and assumptions about overpayments and withholdings in the custom file provided by SSA. The data provided by SSA listed all overpayments that SSA is either actively trying to collect or has conditionally written off, referred to Treasury, and will collect if the beneficiary becomes eligible for disability or retirement benefits. These data list both a claimant—the person whose disability creates eligibility for DI benefits—and a beneficiary—who may be the claimant, the claimant's spouse, or a dependent of the claimant. 
SSA officials told us that the agency can seek repayment from the claimant, beneficiary, or anyone else receiving benefits on the claimant’s earnings record. We aggregated this overpayment level data to the beneficiary level, taking the maximum withholding amount per beneficiary if the beneficiary’s account showed more than one overpayment, and adding together withholding in rare instances when one person benefited from overpayments to multiple claimants. If a beneficiary had withholding on any one of the overpayments on his or her record, we treated all overpayments on the account as subject to recovery through withholding. This methodology can misstate repayment times in situations where, for example, a beneficiary had overpayments both on their own disability claim and their parent’s disability claim, and both parties are involved in repaying the beneficiary’s overpayments. We identified and excluded from our analysis beneficiaries who appeared to be deceased by matching their Social Security Numbers (SSN) to the full SSA Death Master File. This may exclude some recoverable overpayments from our analysis because SSA officials told us that they could seek repayment from anyone receiving benefits on the claimant’s earnings record. While about 40 percent of the conditionally written off recipients matched to the full SSA Death Master File, only about 0.01 percent of people in withholding status matched to the full SSA Death Master File. We computed the time to repay under the status quo condition by dividing the sum of current balances for a beneficiary by the withholding amount, calculated as described above. In general, this yields repayment schedules that end—as expected—no later than December 2049 due to limitations of SSA’s data system. In a handful of cases—where we aggregate one beneficiary across multiple claimants—we get longer repayment times. To identify individual beneficiaries, we used the beneficiary’s SSN when it was available. 
When the beneficiary's SSN was not available, we developed a replacement unique identifier: first, under the assumption that there was only one person with a given name and date of birth for each claimant SSN; and, if the name was missing, under the assumption that each combination of a claimant SSN and the beneficiary identification code variable identifies a unique person. The beneficiary identification code indicates whether the beneficiary is, for example, the claimant's first child, second child, or spouse. This methodology may slightly overstate the total number of beneficiaries in the data, since it will miss cases where the same person is the beneficiary of two different claimants. In order to count the number of recipients in withholding, voluntary repayment, conditionally written off, and neither paying nor in written-off status, we developed categorization rules to resolve ambiguities arising from the small percentage of beneficiaries who had debts in more than one status. Specifically, we considered people to be in withholding status if any of their overpayments indicated that they were in "current pay" status and had a withholding amount. We considered people to be making voluntary remittances if they had no withholding on any overpayment and had a monthly voluntary remittance amount listed on at least one account. We considered beneficiaries to be conditionally written off if all of their overpayments were flagged as conditionally written off. We categorized the remaining beneficiaries as active, but not currently repaying. Throughout this analysis, we use the monthly benefit amount—i.e., the benefits due before a variety of adjustments—to characterize benefit levels. We adjusted future payments in the three scenarios with positive inflation by dividing all of the receipts in a given calendar year by (1+r)^(t-2015), where r is the inflation rate of .020, .027, or .034 and t is the year. 
This assumes that all of the year's inflation takes place on January 1, and will tend to overstate inflation early in the year. This stylized assumption means that our COLA-plus-interest scenario can precisely undo the effects of inflation when, in fact, appropriately set annual COLA and interest charges would typically overcorrect for inflation during some months and undercorrect for it during others. To determine how SSA imposes penalties and sanctions, we reviewed applicable federal laws, regulations, and guidance. We also reviewed SSA plans for improving its administration of penalties and sanctions, internal studies of its use of sanctions, as well as past OIG reviews of penalties and sanctions. We interviewed officials in SSA headquarters who oversee their use; OIG, which investigates potential fraud; and OIG's Office of Counsel to the Inspector General (OCIG), which has responsibility for imposing penalties and considering whether sanctions may be warranted. We requested available data from SSA on the use and disposition of penalties and sanctions. However, after discussions with SSA officials regarding the agency's procedures for collecting and tracking penalties and sanctions, we determined that these data were not sufficiently reliable for our use and did not include them in our report. To gain further insight on how sanctions are tracked and imposed, we interviewed regional sanctions coordinators—individuals responsible for reviewing sanctions determinations—in three of SSA's regions: Atlanta, Chicago, and San Francisco. We chose these regions based on variation in terms of sanctions workload and error rates according to past SSA internal evaluations. The views of these officials are not generalizable across all of SSA. We also spoke to officials in SSA's New York regional office, which developed a database for tracking the disposition of sanctions. 
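The scheduled-repayment computation described in this appendix can be sketched in code. The following is a simplified, hypothetical illustration, not SSA's actual systems or data: the function names and figures are ours, the minimum-withholding option is applied against a fixed monthly benefit rather than post-COLA benefits in each month, and receipts are deflated assuming all of a year's inflation occurs on January 1, as in the methodology above.

```python
# Hypothetical sketch of the scheduled-repayment scenarios described in this
# appendix. All names and numbers are illustrative, not SSA's systems or data.

def months_to_repay(balance, monthly_withholding):
    """Months needed to repay a balance at a fixed monthly withholding,
    with a smaller final payment if the balance runs out mid-schedule."""
    months = 0
    while balance > 0:
        balance -= min(monthly_withholding, balance)
        months += 1
    return months

def scheduled_collections(balance, monthly_benefit, withholding,
                          years=5, min_rate=0.0, inflation=0.0):
    """Total scheduled repayments over `years`, optionally raising the
    withholding to `min_rate` of the (fixed) monthly benefit, and deflating
    receipts by `inflation` (all of each year's inflation assumed to occur
    on January 1, as in the report's methodology)."""
    payment = max(withholding, min_rate * monthly_benefit)
    total = 0.0
    for month in range(years * 12):
        if balance <= 0:
            break
        year = month // 12                    # full years since the start
        receipt = min(payment, balance)       # final payment may be smaller
        balance -= receipt
        total += receipt / (1 + inflation) ** year  # inflation-adjusted
    return round(total, 2)
```

For example, a hypothetical $6,000 balance repaid at the $10 minimum against a $1,000 monthly benefit yields $600 in scheduled collections over 5 years, whereas a 10 percent minimum withholding rate would retire the full balance in that window.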
This appendix provides more information about individuals repaying DI overpayments by having a portion of their monthly benefits withheld—notably, the relationship between their monthly benefit payments and the amount of benefits withheld. All data presented are for outstanding overpayment balances as of September 30, 2015. Throughout this report, the benefit levels we report are SSA's "monthly benefit amount," which is the amount due to beneficiaries before withholding or other adjustments. Figures 7 and 8 below break down this population with benefit withholding into 10 equal groups (deciles) according to the amount of their monthly benefits. Figure 7 shows, for each decile, the median percentage of benefits being withheld. Figure 8 shows the median dollars withheld for each decile. When we compared individuals with lower monthly benefit amounts to those receiving larger benefit amounts, we found that those with the smallest benefits had a higher percentage of their benefits withheld to repay overpayments. The figures also show that the majority of individuals with a larger monthly benefit amount have less than 10 percent of their DI benefits withheld. Figure 8 shows that the difference between withholding 10 percent of the median DI benefit and actual median withholding is more than $86 per month in the top decile, which consists of more than 31,000 beneficiaries. Tables 2 to 5 below provide additional information on the relationship between withholding and benefit amounts. For each table, we report not only the median (the 50th percentile) of the distribution, which offers a sense of the "typical" outcome, but also: the 25th and 75th percentiles, which give a sense of the experience of beneficiaries somewhat below and above the median, respectively; the 5th and 95th percentiles, to offer a sense of the experiences of people with fairly extreme outcomes; the number of beneficiaries from which we computed each number; and the standard deviation. 
The withholding and repayment time averages are often significantly above the median because these distributions are not symmetric; rather, people with the largest withholding levels are much further above the median than the people with the smallest withholding levels are below it. For example, table 3 reports that the 95th percentile withholding level for all beneficiaries ($517) is $460 higher than the median of $57, while the 5th percentile ($10) is $47 below the median. This asymmetric distribution of withholding levels at higher amounts produces an average of $133, which is more than twice the median and more than the 75th percentile of the withholding distribution ($101). In addition to the contact named above, Michele Grgich (Assistant Director), Daniel R. Concepcion (Analyst-in-Charge), Martin Scire, and Robert Letzler made key contributions to this report. Additional contributors include: Susan Aschoff, James Bennett, Kathleen Donovan, Alex Galuten, Arthur Merriam, Monica Savoy, and Almeta Spencer. Disability Insurance: SSA Could Do More to Prevent Overpayments or Incorrect Waivers to Beneficiaries. GAO-16-34. Washington, D.C.: October 29, 2015. Disability Insurance: Preliminary Observations on Overpayments and Beneficiary Work Reporting. GAO-15-673T. Washington, D.C.: June 16, 2015. Supplemental Security Income: SSA Has Taken Steps to Prevent and Detect Overpayments, but Additional Actions Could Be Taken to Improve Oversight. GAO-13-109. Washington, D.C.: December 14, 2012. Disability Insurance: SSA Can Improve Efforts to Detect, Prevent, and Recover Overpayments. GAO-11-724. Washington, D.C.: July 27, 2011.
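The pull that a long right tail exerts on an average, as described in the appendix above, can be reproduced with a small hypothetical example (the values below are illustrative, not SSA data):

```python
# A hypothetical, right-skewed set of monthly withholding amounts: a few
# large values pull the mean well above the median, mirroring the pattern
# described for the withholding distribution.
from statistics import mean, median

withholdings = [10, 10, 20, 30, 50, 60, 80, 100, 400, 500]

print(median(withholdings))  # 55.0: half of these amounts are smaller
print(mean(withholdings))    # 126.0: pulled up by the two large values
```

In a symmetric distribution the two statistics would coincide; here the mean exceeds even most of the individual values, which is why the report presents percentiles alongside averages.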
SSA's DI program provides cash benefits to millions of Americans who can no longer work due to a disability. While most benefits are paid correctly, beneficiary or SSA error can result in overpayments—that is, payments made in excess of what is owed. In fiscal year 2015, SSA detected $1.2 billion in new overpayments, adding to growing cumulative debt. Further, when individuals inappropriately obtain benefits in certain situations, SSA can levy penalties or withhold benefits for a period of time. GAO was asked to study the use of these actions, and SSA efforts to recover overpayments. This report examined how and to what extent SSA recovers overpayments, and imposes penalties and sanctions. GAO analyzed data on existing DI overpayments and repayment amounts at the end of fiscal year 2015 to determine the effect of potential improvements in recovery methods on collection amounts; and reviewed relevant federal laws, regulations, policies, and studies. In fiscal year 2015, the Social Security Administration (SSA) recovered $857 million in Disability Insurance (DI) overpayments that it erroneously made to beneficiaries; however, SSA is missing opportunities to recover more. More than three-fourths of the recovered overpayments in fiscal year 2015 were collected by withholding all or a portion of a beneficiary's monthly benefits. SSA's policy is to set withholding repayment amounts based on a beneficiary's income, expenses, and assets, but its policy regarding which expenses are reasonable is not clear. Moreover, SSA cannot know if repayment periods and amounts are consistently determined due to a lack of oversight, such as supervisory review or targeted quality reviews. Further, SSA lacks concrete plans for pursuing other debt recovery options, while GAO's analysis suggests that some options could potentially increase collections from individuals having their benefits withheld. 
For example, about half of withholding plans at the end of fiscal year 2015 extended beyond SSA's standard 36-month time frame, and could be shortened. Making the minimum monthly repayment 10 percent of a beneficiary's monthly benefit, instead of the current $10 minimum, would shorten the median length of all scheduled withholding plans by almost a third (from 3.4 years to 2.3 years) and result in an additional $276 million collected over the next 5 years. While SSA officials reported an increase in recent years in the amount of civil monetary penalties imposed, SSA currently lacks reliable data to effectively track the disposition of penalties and administrative sanctions. For example, SSA cannot readily track the amounts ultimately collected from penalties, which are fines imposed by the Office of the Inspector General (OIG) and collected by SSA. Further, SSA currently has only two paths for collecting on penalties—withholding benefits and voluntary payment. A recent OIG audit found that the majority of uncollected penalty amounts it reviewed were from individuals who were not receiving SSA benefits and with whom SSA had no ongoing collection actions. SSA determined it is able to use certain alternative collection tools, such as wage garnishment, but only recently began drafting regulations to use them, and the regulations are still undergoing internal review. In addition, SSA lacks the authority to use for penalties certain other collection tools it uses for overpayments, such as credit bureau reporting, and it had not explored obtaining that authority. Related to administrative sanctions, SSA could not provide reliable data on how often it imposes sanctions, a punishment in which benefit payments are temporarily stopped. SSA's process of manually entering sanctions information into a database may be subject to errors or omissions. Regional officials said this can result in incomplete information and staff not taking appropriate action on cases. 
SSA changed its procedure in 2013 to direct that all potential sanctions first be reviewed for potential prosecution or civil monetary penalties, but SSA's lack of reliable data prevents it from determining whether this new procedure achieved the intended effect of more consistent application of sanctions. In an internal evaluation of its procedures, SSA identified weaknesses with how sanctions decisions are tracked and communicated, but it is in the early stages of deciding how to address them. The shortcomings in SSA's use of penalties and sanctions potentially diminish the deterrent value of these actions against individuals who may fraudulently obtain benefits. GAO is making eight recommendations to SSA, including: clarify its policy and improve oversight related to debt repayment plans, pursue additional recovery options for overpayments and penalties, and improve its ability to track penalties and sanctions. SSA agreed with seven, but disagreed with a recommendation on debt recovery options. GAO maintains the options merit exploration, as discussed further in the report.
The Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act) required that FEMA establish the national preparedness system to ensure that the nation has the ability to prepare for and respond to disasters of all types, whether natural or man-made, including terrorist attacks. The Community Preparedness Division is responsible for leading activities related to community preparedness, including management of the Citizen Corps program. According to fiscal year 2008 Homeland Security Grant Guidance, the program is to bring together community and government leaders, including first responders, nonprofit organizations, and other community stakeholders. Through Citizen Corps councils, government and nongovernment stakeholders are to collaborate in involving community members in emergency preparedness, planning, mitigation, response, and recovery. Councils and partner programs register online to be included in the national program registries. The Division also supports the efforts of non-DHS federal "partner programs," such as the Medical Reserve Corps, that promote preparedness and the use of volunteers to support first responders. The CERT program's mission is to educate and train people in basic disaster preparedness and response skills, such as fire safety, light search and rescue, and disaster medical operations, using a nationally developed, standardized training curriculum. Trained individuals can be recruited to participate on neighborhood, business, or government teams to assist first responders. The mission of the Fire Corps program is to increase the capacity of fire and emergency medical service departments through the use of volunteers in nonoperational roles and activities, including administrative, public outreach, fire safety, and emergency preparedness education. 
FEMA also is responsible for a related program, the Ready Campaign, which works in partnership with the Ad Council, an organization that creates public service messages, with the goals of raising public awareness regarding the need for emergency preparedness, motivating individuals to take steps toward preparedness, and ultimately increasing the level of national preparedness. The program makes preparedness information available to the public through its English and Spanish Web sites (www.ready.gov and www.listo.gov), through printed material that can be ordered from the program or via toll-free phone lines, and through public service announcements (PSA). The Ready Campaign message calls for individuals, families, and businesses to (1) get emergency supply kits, (2) make emergency plans, and (3) stay informed about emergencies and appropriate responses to those emergencies. FEMA faces challenges in measuring the performance of local community preparedness efforts because it lacks accurate information on those efforts. FEMA is also confronted with challenges in measuring performance for the Ready Campaign because the Ready Campaign is not positioned to control the placement of its preparedness messages or measure whether its message is changing the behavior of individuals. According to FEMA officials, FEMA promotes citizen preparedness and volunteerism by encouraging collaboration and the creation of community Citizen Corps, CERT, and Fire Corps programs. FEMA includes the number of Citizen Corps councils, CERTs, and Fire Corps established across the country as its principal performance measure. However, FEMA faces challenges ensuring that the information needed to measure the number of established, active units is accurate. In our past work we reported on the importance of ensuring that program data are of sufficient quality to document performance and support decision making. 
Although not a measure under the Government Performance and Results Act, FEMA programs report the number of local units registered as a principal performance measure; however, our work showed that the number of active units reported may differ from the number that actually exist. For example, as of September 2009: Citizen Corps reported having 2,409 registered Citizen Corps councils nationwide that encompass jurisdictions where approximately 79 percent of the U.S. population resides. However, 12 of the 17 registered councils we contacted during our site visits were active, and 5 were not. The CERT program reported having 3,354 registered CERTs. Of the 12 registered CERTs we visited, 11 were actively engaged in CERT activities, such as drills, exercises, and emergency preparedness outreach, or had been deployed to assist in an emergency or disaster situation, although 1 had members that had not been trained. One registered CERT was no longer active. State officials in two of the four states also said that the data on the number of registered programs might not be accurate. One state official responsible for the Citizen Corps council and CERT programs in the state estimated that as little as 20 percent of the registered councils were active, and the state subsequently removed more than half of its 40 councils from the national Web site. Officials in the other state said that the national database is not accurate and they have begun to send e-mails to or call local councils to verify the accuracy of registrations in their state. These officials said that they plan to follow up with those councils that do not respond, but they were not yet certain what they planned to do if the councils were no longer active. These results raise questions about the accuracy of FEMA's data on the number of councils across the nation, and the accuracy of FEMA's measure that registered councils cover 79 percent of the population nationwide. 
Some change in the number of active local programs can be expected, based on factors including changes in government leadership, voluntary participation by civic leaders, and financial support. FEMA officials told us that the Homeland Security Grant Program guidance designates state officials as responsible for approving initial council and CERT registrations and ensuring that the data are updated as needed. According to FEMA officials, however, in practice this may not occur. Community Preparedness Division officials said that they do not monitor whether states are regularly updating local unit registration information. FEMA officials said that FEMA plans to adopt a new online registration process for Citizen Corps councils and CERTs in 2010, which will likely result in some programs being removed from FEMA's registries. They said that FEMA expects to use the new registration process to collect more comprehensive data on membership and council activities. According to FEMA officials, updating initial registration information will continue to be the responsibility of state officials. The Citizen Corps Director noted that the Citizen Corps program does not have the ability to require all local units to update information, particularly councils or CERTs that receive no federal funding. According to the Fire Corps program Acting Director, a state advocacy program initiated in 2007 may help identify inactive programs as well as promote the Fire Corps program. As of September 2009, there were 53 advocates in 31 states. We will continue to assess this issue as part of our ongoing work. Currently, the Ready Campaign measures its performance based on measures such as materials distributed or PSAs shown. For example, according to a DHS official, in fiscal year 2008, the Ready Campaign had more than 99 million "hits" on its Web site, more than 12 million pieces of Ready Campaign literature requested, and 43,660 calls to its toll-free phone lines. 
The Ready Campaign relies on these measures because it faces two distinct challenges in determining whether its efforts are influencing individuals to be more prepared. First, the Ready Campaign is not positioned to control when or where its preparedness message is viewed. Second, the Ready Campaign is not positioned to measure whether its message is changing the behavior of individuals. With regard to the Ready Campaign's ability to control the distribution of its message, our prior work has shown that agencies whose programs rely on others to deliver services face challenges in targeting and measuring results in meeting ultimate goals, and when this occurs, agencies can use intermediate measures to gauge program activities. However, according to FEMA's Acting Director for the Ready Campaign, funds are not available for the Ready Campaign to purchase radio and television time to air its PSAs; rather, the Ready Campaign relies on donated media from various sources. As a result, the Ready Campaign does not control what, when, or where Ready Campaign materials are placed when the media is donated. For example, which PSA is shown and the slots (e.g., a specific channel at a specific time) that are donated by television, radio, and other media companies are not under the Ready Campaign's control, and these are not always prime viewing or listening spots. Based on Ad Council data, the Ready Campaign's PSAs in 2008 were aired about 5 percent or less of the time by English and Spanish television stations during prime time (8:00 p.m. to 10:59 p.m.), and about 25 percent of the PSAs were aired from 1:00 a.m. to 4:00 a.m. Similarly, about 47 percent of English radio spots and about 27 percent of Spanish radio spots were aired from midnight to 6:00 a.m. FEMA officials said that with the release of its September 2009 PSAs, they expect increased placement during hours when there are more viewers and listeners. 
Just as the Ready Campaign has no control over the time PSAs are aired, it does not control the type of media (e.g., radio and television) donated. Based on Ad Council data on the dollar value of media donated to show Ready Campaign materials (the value of the donated media is generally based on what it would cost the Ready Campaign if the media space were purchased), much of the value from donated media is based on space donated in the yellow pages. Figure 1 shows the value of various types of media donated to the Ready Campaign to distribute its message during 2008. The Ready Campaign also faces a challenge determining the extent to which it contributes to individuals taking action to become more prepared—the program’s goal. Measuring the Ready Campaign’s progress toward its goal is problematic because it can be difficult to isolate the specific effect of exposure to Ready Campaign materials on an individual’s level of emergency preparedness. Research indicates that there may be a number of factors that are involved in an individual taking action to become prepared, such as his or her beliefs as to vulnerability to disaster, geographic location, or income. A basic question in establishing whether the Ready Campaign is changing behavior is, first, determining the extent to which the Ready Campaign’s message has been received by the general population. The Ad Council conducts an annual survey to determine public awareness of the Ready Campaign, among other things. For example, in the Ad Council’s 2008 survey: When asked if they had heard of a Web site called Ready.gov that provides information about steps to take to prepare in the event of a natural disaster or terrorist attack, 21 percent of those surveyed said that they were aware of the Ready.gov Web site. When asked a similar question about television, radio, and print PSAs, 37 percent of those surveyed said that they have seen or heard at least one Ready Campaign PSA. 
Another factor is isolating the Ready Campaign’s message from other preparedness messages that individuals might have received. The Ad Council’s 2008 survey found that 30 percent of those surveyed identified the American Red Cross as the primary source of emergency preparedness information; 11 percent identified the Ad Council. While the Ad Council survey may give a general indication as to the population’s familiarity with the Ready Campaign, it does not provide a measure of preparedness actions taken based on the Ready Campaign’s promotion, that is, a clear link from the program to achieving program goals. The Ad Council reported that those who were aware of Ready Campaign’s advertising were significantly more likely to say that they had taken steps to prepare for disaster, but acknowledged that the Ready Campaign could not claim full credit for the differences. Further, as the 2009 Citizen Corps survey showed, the degree to which individuals are prepared may be less than indicated because preparedness drops substantially when more detailed questions about supplies are asked. We will continue to assess FEMA’s efforts to measure the performance of the Ready Campaign as part of our ongoing work. While DHS’s and FEMA’s strategic plans have incorporated efforts to promote community preparedness, FEMA has not developed a strategy encompassing how Citizen Corps, its partner programs, and the Ready Campaign are to operate within the context of the national preparedness system. An objective in DHS’s Strategic Plan for 2008-2013 to “Ensure Preparedness” envisions empowering Americans to take individual and community actions before and after disasters strike. 
Similarly, FEMA’s Strategic Plan for 2008-2013 envisions a strategy to “Lead the Nation’s efforts for greater personal and community responsibility for preparedness through public education and awareness, and community engagement and planning, including outreach to vulnerable populations.” FEMA’s Strategic Plan delegates to the agency’s components the responsibility for developing their own strategic plans, which are to include goals, objectives, and strategies. FEMA’s Strategic Plan states that the components’ strategic plans are to focus on identifying outcomes and measuring performance. NPD has not clearly articulated goals for FEMA’s community preparedness programs or a strategy to show how Citizen Corps, its partner programs, and the Ready Campaign are to achieve those goals within the context of the national preparedness system. In our past work, we reported that desirable characteristics of an effective national strategy include articulating the strategy’s purpose and goals; followed by subordinate objectives and specific activities to achieve results; and defining organizational roles, responsibilities, and coordination, including a discussion of resources needed to reach strategy goals. In April 2009, we reported that NPD had not developed a strategic plan that defines program roles and responsibilities, integration and coordination processes, and goals and performance measures for its programs. We reported that instead of a strategic plan, NPD officials stated that they used a draft annual operating plan and Post-Katrina Act provisions to guide NPD’s efforts. The draft operating plan identifies NPD goals and NPD subcomponents responsible for carrying out segments of the operating plan, including eight objectives identified for the Division under NPD’s goal to “enhance the preparedness of individuals, families, and special needs populations through awareness planning and training.” NPD’s objectives for meeting this goal do not describe desired outcomes. 
For example, one of NPD’s objectives for the Community Preparedness Division is to increase “the number of functions that CERTs will be able to perform effectively during emergency response,” but the plan does not describe how many and what type of functions CERTs currently perform, what additional functions they could perform, and what it means to be effective. NPD’s draft operating plan also does not include other key elements of an effective national strategy, such as how it will measure progress in meeting its goals and objectives; the roles and responsibilities of those who will be implementing specific programs within the Community Preparedness Division, such as Citizen Corps or Fire Corps; or potential costs and types of resources and investments needed to meet goals and objectives needed to implement civilian preparedness programs. As a result, NPD is unable to provide a picture of priorities or how adjustments might be made in view of resource constraints. In our April 2009 report we recommended that NPD take a more strategic approach to implementing the national preparedness system to include the development of a strategic plan that contains such key elements as goals, objectives, and how progress in achieving them will be measured. DHS concurred with our recommendation and, in commenting on our report, stated that it reported making progress in this area and is continuing to work to fully implement the recommendation. NPD officials stated in September 2009 that DHS, FEMA, and NPD, in coordination with national security staff, were discussing Homeland Security Presidential Directive 8 (National Preparedness), including the development of a preparedness strategy and an implementation strategy. They said that community and individual preparedness were key elements of those discussions. 
However, NPD officials did not state when the strategy will be completed; thus, it is not clear to what extent it will integrate Citizen Corps, its partner programs, and the Ready Campaign. NPD officials stated that work is under way on revising the target capabilities, which are to include specific outcomes, measures, and resources. NPD officials said that the draft for public comment is expected to be issued in fiscal year 2010. The Ready Campaign is also working to enhance its strategic direction. According to the FEMA Director of External Affairs, the Ready Campaign's strategy is being revised to reflect the transition of the program from DHS's Office of Public Affairs to FEMA's Office of External Affairs, and the new FEMA Director's approach to preparedness. Program officials said that the Ready Campaign will have increased access to staff and resources and is to be guided by a FEMA-wide strategic plan for external communications. As of September 2009, the plan was still being developed and no date had been set for its completion. We will continue to monitor this issue, as well as FEMA's effort to develop a strategy encompassing how Citizen Corps and its partner programs are to operate within the context of the national preparedness system. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have. For further information about this testimony, please contact William O. Jenkins, Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or JenkinsWO@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Major contributors to this testimony included John Mortin, Assistant Director, and Monica Kelly, Analyst-in-Charge. Carla Brown, Qahira El'Amin, Lara Kaskie, Amanda Miller, Cristina Ruggiero-Mendoza, and Janet Temko made significant contributions to the work. 
Department of Homeland Security support for local community preparedness activities is provided through homeland security grants, specifically the Citizen Corps grant program, but community preparedness activities are also eligible for support under other homeland security grants. Citizen Corps grants are awarded based on a formula that allocates 0.75 percent of the total amount available to each state (including the District of Columbia and the Commonwealth of Puerto Rico) and 0.25 percent of the total amount available to each U.S. territory, with the balance of funding distributed on a population basis. For other DHS homeland security grants, a state prepares a request for funding, which can include support for the state's community preparedness efforts, as allowed under the guidance for a particular grant. For example, the 2009 Homeland Security Grant Guidance lists "Conducting public education and outreach campaigns, including promoting individual, family and business emergency preparedness" as an allowable cost for state homeland security grants. Grant funding can be used to support Citizen Corps, Citizen Corps partner programs, or other state community preparedness priorities. The Federal Emergency Management Agency's (FEMA) grant reporting database does not categorize grants in a way that allows identification of the amount of funding going to a particular community preparedness program. Table 1 summarizes the approximately $269 million in DHS grants that grantees identified as supporting community preparedness projects from fiscal years 2004 through 2008. The amount is an approximation because of limitations in identifying grants for such projects. Our selection of projects for inclusion relied on grantees identifying their projects under one of three predefined project types that FEMA officials said are relevant for community preparedness, or on projects being funded with a Citizen Corps program grant. 
Not all grantees may have used these descriptions. We worked with grant officials to identify the most appropriate grant selection criteria.
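To make the allocation formula above concrete, the following is a minimal, illustrative sketch, not an official implementation. It assumes the population-based balance is shared by all jurisdictions in proportion to population, and all jurisdiction names and dollar figures are hypothetical; the formula's base rates (0.75 percent per state, 0.25 percent per territory) come from the text.

```python
def citizen_corps_allocations(total, state_pops, territory_pops):
    """Sketch of the Citizen Corps grant formula described above: each
    state (including DC and Puerto Rico) receives a base of 0.75 percent
    of the total amount available, each U.S. territory receives 0.25
    percent, and the remaining balance is distributed in proportion to
    population (an assumption about how the balance is shared)."""
    base = {s: 0.0075 * total for s in state_pops}
    base.update({t: 0.0025 * total for t in territory_pops})
    balance = total - sum(base.values())
    pops = {**state_pops, **territory_pops}
    pop_total = sum(pops.values())
    return {j: base[j] + balance * pops[j] / pop_total for j in pops}

# Hypothetical example: two states and one territory sharing $100 million.
allocations = citizen_corps_allocations(
    100_000_000,
    {"State A": 3_000_000, "State B": 1_000_000},
    {"Territory T": 500_000})
```

Under this sketch, every jurisdiction is guaranteed its base amount regardless of size, and the allocations always sum to the total available.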
By preparing their families and property before an event, individuals can reduce a disaster's impact on them and their need for first responder assistance, particularly in the first 72 hours following a disaster. By law, the Federal Emergency Management Agency (FEMA), located in the Department of Homeland Security (DHS), is to develop a national preparedness system (NPS)--FEMA includes community preparedness programs as part of the NPS. FEMA's budget to operate these programs made up less than one half of 1 percent of its $7.9 billion budget for fiscal year 2009. These programs include the Citizen Corps program and its partner programs, such as Fire Corps, and rely on volunteers to coordinate efforts and assist first responders in local communities. DHS's Ready Campaign promotes preparedness through mass media. This testimony provides preliminary observations on (1) challenges FEMA faces in measuring the performance of Citizen Corps, its partner programs, and the Ready Campaign and (2) actions FEMA has taken to develop a strategy to encompass how Citizen Corps, its partner programs, and the Ready Campaign operate within the context of the NPS. This testimony is based on work conducted from February 2008 to October 2009. GAO analyzed documents, such as FEMA's strategic plan, and compared reported performance data with observations from 12 site visits, selected primarily based on the frequency of natural disasters. The results are not projectable, but provide local insights. FEMA faces challenges measuring performance for Citizen Corps, partner programs, and the Ready Campaign because it does not have a process to verify that data for its principal performance measure--the registered number of established volunteer organizations across the country--are accurate and the Ready Campaign is not positioned to control the distribution of its message or measure whether its message is changing individuals' behavior. 
FEMA faces challenges ensuring that the information needed to measure the number of established, active volunteer units is accurate. For example, officials representing the 17 councils GAO contacted during its site visits stated that 12 were active and 5 were not. FEMA officials said that the new online registration process FEMA plans to adopt in 2010 will result in some programs being removed from FEMA's registries. They said that FEMA expects to use the new process to collect more comprehensive data on membership and council activities. FEMA counts requests for literature, Web site hits, and the number of television or radio announcements made to gauge performance for the Ready Campaign, but FEMA does not control when its message is viewed because it relies on donated media, such as air time for television and radio announcements. Because changes in behavior can result from a variety of factors, including other campaigns, it is difficult to measure the campaign's effect on changes in individuals' behavior. FEMA's challenges in measuring the performance of community preparedness programs are compounded by the fact that it has not developed a strategy encompassing how Citizen Corps, its partner programs, and the Ready Campaign are to operate within the context of the NPS. In April 2009, GAO reported that FEMA's National Preparedness Directorate (NPD), which is responsible for community preparedness, had not developed a strategic plan. GAO reported that instead of a strategic plan, NPD officials stated that they used a draft annual operating plan and Post-Katrina Act provisions to guide NPD's efforts. However, the plan's objectives do not include key elements of a strategy, such as how NPD will measure its progress in meeting goals and objectives or the potential costs and types of resources and investments needed. GAO recommended that NPD develop a strategic plan to implement the NPS that contains these key elements. 
FEMA concurred with GAO's recommendation and told GAO that it is taking actions to strengthen strategic planning. FEMA officials stated that they are reviewing implementation plans and policy documents, such as the National Preparedness Guidelines, and that community preparedness is a key element being considered in this process. FEMA has not set a date for completion of the National Preparedness System strategy, and the extent to which Citizen Corps, its partner programs, or the Ready Campaign will be included in the final strategy is not clear. GAO will continue to assess FEMA's efforts related to community preparedness programs as part of its ongoing work. FEMA provided technical comments on a draft of this testimony, which GAO incorporated as appropriate.
The shipbuilding industry in the United States is predominantly composed of three different types of shipyards: (1) privately owned shipyards that build naval vessels; (2) small privately owned shipyards that build commercial vessels; and (3) U.S. government-owned naval shipyards that conduct maintenance, repairs, and upgrades on Navy and Coast Guard vessels. As a result of consolidation, two major corporations—General Dynamics and Northrop Grumman—own most of the private shipyards that build Navy ships. General Dynamics owns Bath Iron Works in Bath, Maine; Electric Boat in Groton, Connecticut, and Quonset Point, Rhode Island; and NASSCO in San Diego, California. Northrop Grumman owns Northrop Grumman Shipbuilding–Gulf Coast with locations in Pascagoula, Mississippi, and New Orleans, Louisiana; and Northrop Grumman Shipbuilding–Newport News in Virginia. Some of these shipyards maintain additional support facilities in other locations to assist in production processes, such as Gulf Coast’s Gulfport, Mississippi facility that constructs lightweight ship components also known as composites. Along with these five major shipyards, there are two midsized shipyards that construct smaller Navy ships. Marinette Marine Corporation in Marinette, Wisconsin, is owned by the Italian shipbuilder Fincantieri, and Austal USA in Mobile, Alabama, is owned by Austal, which is headquartered in Western Australia. Figure 1 shows the location and the current product lines of each shipyard. Several of these shipyards have specialized production capabilities that constrain and dictate the types of vessels each can build and limit opportunities for competition within the shipbuilding sector. For instance, of the five major shipyards, only Newport News is capable of building nuclear-powered aircraft carriers, and only Newport News and Electric Boat have facilities for constructing nuclear submarines. 
Furthermore, of the five major shipyards, only NASSCO builds commercial ships alongside Navy ships. It typically builds Navy auxiliary ships, such as the T-AKE class of dry cargo/ammunition vessels, that share similarities with commercial ships, and, according to the shipbuilder, production processes and equipment are shared between the two types of projects. When the Navy contracts with these shipyards, it must follow provisions in the Federal Acquisition Regulation (FAR), which establishes uniform policies and procedures for acquisition by all executive agencies. In addition, the Cost Accounting Standards provide uniformity and consistency in cost accounting practices across government contracts. As a general policy under the FAR, contractors are usually required to furnish all facilities and equipment necessary to perform government contracts. However, in specific situations, including when it is clearly demonstrated that it is in the government's best interest or when government requirements cannot otherwise be met, the government may provide government property to contractors to perform a contract. For example, as part of the DDG 1000 destroyer contract, the Navy included a requirement for Bath Iron Works to purchase unique equipment necessary to produce the DDG 1000. This equipment was acquired as government property because the equipment is unique to DDG 1000 construction and the contractor is unlikely to use it to perform another contract. When a contractor furnishes facilities and equipment to perform a contract, the government recognizes the costs associated with these items by paying depreciation and facilities capital cost of money costs allocated to the contract. Depreciation and facilities capital cost of money costs are indirect contract costs, or costs incurred for the general operation of the business that are not specifically applicable to one product line or contract. 
The FAR, in conjunction with the Cost Accounting Standards, includes provisions for how a contractor recovers costs such as depreciation and facilities capital cost of money as part of indirect contract costs allocated to government contracts. By recovering depreciation costs, the contractor recoups the cost of an asset—a facility or a piece of equipment—over the asset’s estimated useful life. Facilities capital cost of money acknowledges the opportunity cost for a contractor when it uses its funds to invest in facilities and equipment in lieu of other investments such as relatively risk-free bonds. Facilities capital cost of money is determined by multiplying the net book value of the contractor’s capital assets by a cost-of-money rate, which is a rate tied to the U.S. treasury rate. With respect to Navy shipbuilding, a shipyard’s indirect costs, including depreciation and facilities capital cost of money, are allocated to the Navy’s shipbuilding contracts at the shipyard in accordance with the Cost Accounting Standards. When a shipyard makes facilities and equipment investments, all ships under contract during the life of those assets are allocated a portion of the assets’ indirect costs. Therefore, if the number of ships under construction at a given time in a shipyard increases, the indirect costs per ship decrease, and if the number of ships under construction at a given time in a shipyard decreases, the indirect costs per ship increase. Over the last 10 years, major shipyards used public and corporate funds to invest more than $1.9 billion in facilities and equipment that improved shipbuilding efficiency, developed new capabilities, and maintained existing capabilities. Figure 2 shows the amount of money invested in each category. These categories are defined as follows: Improving shipbuilding efficiency. 
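The two relationships described above, the cost-of-money calculation and the allocation of indirect costs across ships under construction, can be sketched as follows. This is a simplified illustration: actual allocation under the Cost Accounting Standards is more involved, the even split across ships is an assumption, and all dollar figures are hypothetical.

```python
def facilities_capital_cost_of_money(net_book_value, cost_of_money_rate):
    # Cost of money = net book value of the contractor's capital assets
    # multiplied by the Treasury-based cost-of-money rate, per the text.
    return net_book_value * cost_of_money_rate

def indirect_cost_per_ship(total_indirect_costs, ships_under_construction):
    # Simplified allocation assumption: indirect costs spread evenly, so
    # each additional ship under contract lowers any one ship's share.
    return total_indirect_costs / ships_under_construction

# Hypothetical figures: $100 million of net assets at a 5 percent rate,
# and $12 million of indirect costs spread over four ships.
cost_of_money = facilities_capital_cost_of_money(100_000_000, 0.05)
per_ship = indirect_cost_per_ship(12_000_000, 4)
```

The second function makes the text's point directly: dropping from four ships to three raises each ship's allocated share of the same indirect-cost pool.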
Investments in improving shipbuilding efficiency generally reduce the number of hours shipbuilders spend on a given task, and often allow shipbuilders to reorder the sequence of shipbuilding work to achieve new efficiencies. For example, investments in improving efficiency can make it possible for shipbuilders to complete more work in specially designed workshops and modular assembly buildings, thus having to complete less of the work later in the shipbuilding process inside the more constrained environments of almost-completed areas of the ship. To illustrate how these investments improve efficiency, shipyard officials often describe the "1-3-8 rule of thumb" of shipbuilding work: work that takes 1 hour to complete in a workshop takes 3 hours to complete once the steel panels have been welded into units (sometimes called modules), and 8 hours to complete after a block has been erected or after the ship has been launched. For example, inside the recently constructed Ultra Hall at Bath Iron Works, shipbuilders can now access work spaces more easily in a climate-controlled environment, allowing them to finish units at a higher stage of completion before they are erected and then moved into the water. Figure 3 is a photo of a unit being moved out of the Ultra Hall. Developing new capabilities. Shipyards make investments to develop new capabilities so that they can complete new types of tasks. In some cases, shipyards need these new capabilities to meet the Navy's technical requirements for new ships. For example, to build a newly designed aircraft carrier with heavier metal plate requirements than those of previous aircraft carriers, Newport News invested in new facilities and equipment. These investments included building a heavy-plate facility and upgrading a crane to make it capable of lifting heavier modules. 
Other shipyards also identified purchasing cranes as examples of investments to develop new capabilities. Maintaining capabilities. From time to time, shipyards make major investments to replace or repair facilities and equipment. This allows the shipyards to maintain existing capabilities for years or decades. For example, Electric Boat officials explained that its shipyard had to make a major investment in dock repair in order to maintain the shipyard’s ability to launch and repair submarines. Through investments to improve efficiencies and develop new capabilities, major shipyards modernized their facilities and equipment, thus transforming their shipbuilding processes. Some of these investments completely changed the physical layouts of shipyards. For example, Bath Iron Works completed a Land Level Transfer Facility in 2001, replacing an inclined-way transfer facility used since 1890. Bath Iron Works officials explained that with the Land Level Transfer Facility, the shipyard now has the capability to construct ships in larger, more fully outfitted units on any one of three construction lanes. The shipyard also has a floating dry dock that it can move to any of the three construction lanes to transfer the ship into the water. With this arrangement, the shipyard can better manage when a ship is ready to be moved to the water. Another example includes NASSCO’s facility expansion project, which fundamentally changed the layout of the shipyard to increase production capacity, throughput, and efficiency. In particular, NASSCO added new production lanes to reduce shipyard congestion, allowing builders to move units around the shipyard with reduced bottlenecks, and added a modern blast and paint facility to improve paint process efficiency while reducing emissions. Finally, Newport News built a new pier, thus increasing its capacity for servicing and completing construction of aircraft carriers. 
Table 1 shows selected investments at each major shipyard, sometimes funded through public or corporate funds, over the last 10 years. These selected investments highlight examples of projects by investment category as well as the magnitude of some investments at shipyards. Two midsized shipyards, Austal USA and Marinette Marine Corporation, started construction of two different designs of the Littoral Combat Ship for the Navy in 2005, and their investments have focused primarily on maintaining shipyard capabilities and developing new capabilities in order to compete for Navy contracts. Austal USA used both public and corporate money to complete investments of approximately $155 million in facilities and equipment since 1999, and Austal USA officials said these investments were mostly to develop new capabilities to compete for Navy business. For example, Austal USA officials said that to develop the capacity to work on new Navy ships, their shipyard invested approximately $85 million to build the Modular Manufacturing Facility. Shipyard officials said that with this facility, the yard constructs ships in a modular fashion to maximize productivity, efficiency, and throughput. Marinette Marine Corporation officials stated that investments over the last 10 years have largely been to maintain capabilities, but the shipyard’s new owner, Fincantieri, plans to make significant investments in the future. To incentivize investments, the Navy has provided support to most major shipyards with four mechanisms: early release of contract retentions, accelerated depreciation, special contract-incentive fees, and contract share-line adjustments. However, the Navy has not incentivized investments at the two midsized shipyards. Navy officials cited the lack of competition and instability of Navy work in shipbuilding as major reasons why the Navy needs to incentivize investments in facilities and equipment at major shipyards. 
At the shipyards, officials argued that they cannot secure corporate support for many investments without Navy incentives. Shipyard officials also pointed to instability in the Navy's long-range shipbuilding plans as a reason their shipyards usually do not pursue investments without Navy support. Over the last 10 years, the Navy has expanded its use of investment incentives and is now involved in providing some form of investment support at all major shipyards. The Navy has provided support to most major shipyards with four types of investment incentives: early release of contract retentions, accelerated depreciation, special contract-incentive fees, and contract share-line adjustments. Early release of contract retentions. By releasing contract retentions early, the Navy disburses money to a shipyard earlier than scheduled from a reserve normally retained to ensure ships are delivered according to specifications. For example, instead of holding 3.75 percent of the contract payments in retentions, the Navy might hold only 1.5 percent, releasing the remaining 2.25 percent early to a shipyard in exchange for the shipyard investing in facilities or equipment. Navy officials said that with this incentive, the Navy does not provide additional funds to the shipyard, but rather provides funds that the contractor would receive anyway upon successful completion of the contract. Shipbuilders said the early release of contract retentions provides funds with which the shipyard can make investments that it might otherwise not be able to make. The early release of contract retentions may fund the entire capital investment or a portion of it. Accelerated depreciation. When accelerating depreciation, the Navy pays the shipyard higher payments for depreciation of an asset over a shortened timeline than it would under a normal depreciation payment schedule. In exchange, the shipyard agrees to fund the investment. 
This benefits the shipyard because it recoups its investment faster than it would have under a normal depreciation schedule. For example, if a shipyard asset has a useful life of 9 years, the shipyard recoups a portion of the investment each year over that span. However, if an incentive agreement accelerated the depreciation schedule, the shipyard would receive larger payments earlier and over fewer years. Navy and shipbuilding officials explained that this kind of incentive can help bridge a gap between an investment's expected rate of return and the corporation's desired rate of return to help justify making an investment. See table 2 for a comparison of normal and accelerated straight-line depreciation. Special contract-incentive fee. While incentive fees are used in contracts across the Department of Defense generally to motivate contractor efforts, the Navy also uses special contract-incentive fees specifically to encourage investments in facilities and equipment. On a contract that includes such a special incentive fee, a shipyard may earn a fee for making an investment. This special fee is available to the shipyard only if it agrees to make a Navy-approved investment. The special fee may pay for all or part of the investment. In some cases, the incentive bridges the difference between the corporation's desired rate of return and the projected return on an investment. Contract share-line adjustment. The contract share-line defines what share of underruns or overruns will accrue to the contractor and to the Navy. By adjusting the contract share-line ratio, the Navy can incentivize a contractor to invest in facilities or equipment that will reduce costs. For example, during original contract negotiations for a fixed-price incentive or cost-plus incentive contract, the two parties may agree to an even share of the savings if the total negotiated or allowable cost ends up being less than the total target cost. 
Through a contract modification, the Navy could change the original sharing ratio so that more of the savings are given to the contractor. Under this modification, the contractor is incentivized to invest in a facility or equipment that may reduce costs so that it earns a higher fee. The Navy will benefit from these lower costs on all future contracts. See figure 4 for an example of a share-line adjustment. The Navy also manages Hurricane Katrina relief funds, which Congress appropriated for infrastructure improvements at shipyards that build Navy ships in states affected by Hurricane Katrina. This support differs from incentive programs at other shipyards because it is direct federal funding and is not tied to a specific Navy shipbuilding program. These funds were not directed to repairing specific damage from the hurricane but can be used for a variety of projects at eligible shipyards. Table 3 provides an overview of investment incentive mechanisms and how the Navy has used each incentive to support investments at shipyards. Appendix II includes additional details of the investment incentives at each shipyard. The Navy has not negotiated investment incentives at the two midsized shipyards, Austal USA and Marinette Marine Corporation, which are both competing for the Littoral Combat Ship contract, though both have received other forms of federal government support for facilities and equipment investments. Both shipyards received grants from the U.S. Department of Transportation's Maritime Administration, which are available to small shipyards for capital and related improvements that foster efficiency and competitive operations. For example, Marinette Marine Corporation officials said that their shipyard received $1.4 million to help finance investments in new cranes. In addition, Austal received almost $34 million of federal Hurricane Katrina funds to help finance its Modular Manufacturing Facility. 
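Stepping back, the arithmetic behind the three contract-based incentive mechanisms described above is simple. A minimal sketch follows; the 3.75 and 1.5 percent retention rates mirror the example in this report, while all dollar figures, the share ratios, and the function names are hypothetical illustrations, not data from our review.

```python
# Illustrative arithmetic for three Navy investment incentive mechanisms.
# Retention rates are expressed in basis points to keep the math exact.

def retentions_released_early(contract_payments, normal_bp=375, reduced_bp=150):
    """Funds freed when the Navy lowers its retention rate (rates in basis points)."""
    return contract_payments * (normal_bp - reduced_bp) // 10_000

def straight_line_depreciation(cost, years):
    """Equal annual depreciation payments over the given number of years."""
    return [cost // years] * years

def contractor_share_of_underrun(target_cost, actual_cost, contractor_ratio):
    """Contractor's portion of savings when actual cost comes in under target."""
    underrun = max(target_cost - actual_cost, 0)
    return round(underrun * contractor_ratio)

# Retentions: on $100M of contract payments, dropping the retention rate
# from 3.75 to 1.5 percent frees 2.25 percent, or $2.25M, for investment.
print(retentions_released_early(100_000_000))            # 2250000

# Accelerated depreciation: a $9M asset over its 9-year useful life pays
# $1M per year; accelerated to 3 years, it pays $3M per year.
print(straight_line_depreciation(9_000_000, 9)[0])       # 1000000
print(straight_line_depreciation(9_000_000, 3)[0])       # 3000000

# Share-line adjustment: on a $10M underrun, raising the contractor's share
# from 50 to 70 percent raises its take from $5M to $7M.
print(contractor_share_of_underrun(500_000_000, 490_000_000, 0.50))  # 5000000
print(contractor_share_of_underrun(500_000_000, 490_000_000, 0.70))  # 7000000
```

Each mechanism changes when or how much cash reaches the shipyard, not (in the retentions case) whether the shipyard is ultimately entitled to it.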
Both midsized shipyards have plans for further expansions, but neither currently plans to request Navy investment incentives to carry them out. Navy officials, shipyard officials, and corporate officials from Northrop Grumman and General Dynamics provided different perspectives on the reasons for using incentives to encourage investment in the Navy shipbuilding market. Navy officials told us that the Navy negotiates investment incentives with major shipyards because limited competition in the market does not foster an environment that encourages shipyards to invest without incentives. For example, Newport News is the only shipyard capable of building aircraft carriers. A Navy contracting officer said that, as a result, there may be a disincentive for Newport News to invest in projects that improve efficiency. Generally speaking, at contract negotiation, the government's proposed contractor fee is based on a percentage of total estimated allowable contract costs, with the percentage reflecting various weighted risk factors. Newport News, as a sole supplier, will likely construct all future aircraft carriers but could earn a lower fee if new efficiencies reduce the total cost of construction. Even in cases where there is limited shipbuilding competition, such as with surface combatants, shipyards may face similar disincentives to invest. If a shipyard invests to improve efficiency, these investments will likely reduce the price of a ship and can lower future profits. However, where some competition exists, better efficiency may lead to winning a greater allocation of future work. Navy officials added that shipyards that are not confident Navy work will materialize or be funded as scheduled are reluctant to make capital investments without government incentives. Officials from major shipyards argued that instability in long-range Navy shipbuilding plans discourages shipyards from making investments without guaranteed Navy work. 
Because major shipyards generally do not perform commercial work, there are few inducements other than Navy shipbuilding opportunities to invest in new facilities and equipment. For example, at one shipyard, an official explained that it had invested in a facility in anticipation of an upcoming contract. The Navy changed the shipbuilding program and did not award the contract, leaving the facility underutilized until receipt of another contract several years later. The official emphasized that this shipyard will never again invest in new facilities without a signed contract guaranteeing future work, adding that to do otherwise would not be a prudent business decision. Officials from major shipyards also argued that their shipyards need Navy incentives because many potential investments in facilities and equipment do not meet the corporation's desired rate of return. In addition, some shipyard officials stated that since they cannot secure corporate investment for many projects, they often look first for state or federal support for new investments to help bridge the gap between their corporation's desired rate of return and the expected rate of return of the investment. Corporate officials argued that low rates of shipbuilding production, low shipbuilding fees relative to invested capital, and the length of time it takes to build a ship sometimes mean investments take too long to generate an acceptable return, or will never generate one. Moreover, officials stated that shipyard investments are large, sometimes exceeding $25 million for a single investment. Furthermore, other sectors of these corporations are often better positioned than shipyards to propose investments that achieve the corporation's desired rate of return because these sectors can use less expensive investments to improve processes for high-volume products. 
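The rate-of-return gap that these officials describe can be made concrete with a small sketch. The figures and function names below are hypothetical illustrations, not data from our review; the sketch assumes a simple net-present-value test of projected savings against the corporation's hurdle rate.

```python
# Hypothetical sketch of the gap between an investment's projected return
# and a corporation's desired (hurdle) rate of return, and of the incentive
# fee that would bridge it. All figures are illustrative.

def npv(rate, cashflows):
    """Net present value of year-end cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def fee_to_meet_hurdle(investment, annual_savings, years, hurdle_rate):
    """Incentive fee needed so discounted savings cover the net outlay."""
    pv_savings = npv(hurdle_rate, [annual_savings] * years)
    return max(investment - pv_savings, 0.0)

# A $25M project saving $3M a year for 10 years falls well short of a
# 15 percent hurdle rate: the savings are worth only about $15.1M today...
pv = npv(0.15, [3_000_000] * 10)

# ...so an incentive fee of roughly $9.9M would be needed to bridge the gap.
fee = fee_to_meet_hurdle(25_000_000, 3_000_000, 10, 0.15)

# A cheaper project clearing the hurdle on its own needs no incentive.
no_fee = fee_to_meet_hurdle(10_000_000, 3_000_000, 10, 0.15)
```

This is consistent with what officials told us qualitatively: long build cycles and low production rates push savings far into the future, where discounting at the corporate hurdle rate shrinks them below the up-front cost.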
Corporate officials agreed that corporations generally make investments to maintain capabilities even when those investments do not meet the corporation's desired rate of return, because these investments are necessary to stay in business. Over the past 10 years, the Navy has moved from providing onetime support of major capital investments to more routine support of investment spending at all five major shipyards. In 2000, Bath Iron Works was in the process of completing construction of its Land Level Transfer Facility, an investment that the Navy incentivized through early release of contract retentions. Since then, the Navy has used investment incentives to facilitate facilities and equipment investments at four of the five major shipyards, across multiple shipbuilding programs. At the fifth major shipyard, Gulf Coast, the Navy administered Hurricane Katrina recovery money to support investments. Since 2007, the Navy has actively supported investments at all major shipyards with investment incentives or Hurricane Katrina recovery funding. Figure 5 shows the Navy's expanded support to private shipyards over the last 10 years. Senior-level Navy officials stated that negotiating facilities and equipment incentives is becoming a routine part of contract negotiations, but officials expressed different opinions about which mechanisms are most useful. While the Navy has used early release of retentions and accelerated depreciation throughout the past 10 years, it has recently started to negotiate special contract-incentive fees during contract negotiation as part of its cost-control strategy, as it did during the Virginia-class submarine Block II and Block III contract negotiations. Senior-level Navy officials have differing views on whether it is better to include incentives as part of a contract or to negotiate them after the Navy awards a contract. 
One contracting officer observed that the length of time involved in obtaining the required Cost Accounting Standards Board waiver for accelerated depreciation may have led officials to pursue other investment incentives. Contractors may also ask the Navy and other services to expand the scope of current incentive activities. Shipyards have already started to request incentives for a variety of projects outside of investments in facilities and equipment; a shipyard recently requested funding assistance for Lean Six Sigma process-improvement training. In addition, the T-AKE contract includes a cost-reduction initiative in which the Navy paid for projects that reduced costs through design and producibility improvements but did not require new investments in facilities or equipment. Moreover, one company told us that corporate divisions supporting other government-related product lines have expressed interest in these types of facilities and equipment incentives. The Navy does not have a policy outlining its goals and objectives for providing financial incentives to shipyards to encourage facilities and equipment investments. Without such a policy, the Navy has not identified whether it expects a minimum return on investment for this support or which kinds of investments are in its best interest to support. The Navy has also not considered the extent to which investment incentives affect depreciation and facilities capital cost of money at shipyards. Navy officials also lack guidance on how to validate outcomes and safeguard financial interests, resulting in varying approaches across programs. In a 2008 report to Congress, the Navy recognized a need to clarify its priorities and objectives for supporting investments at shipyards, but it has not yet developed this clarifying guidance. 
Navy officials stated that program offices and contracting officers negotiate incentives on a program-by-program basis and that there is no guidance on which investment mechanism is appropriate under which circumstances. Use of special contract-incentive fees is becoming common, yet some Navy officials suggested that adjustments in contract terms, such as a share-line adjustment, provide a strong incentive for successful program implementation. While reducing cost is the goal of many facilities and equipment investment incentives, the Navy does not define a metric or minimum desired level for these cost reductions. This results in differences in expected outcomes across investment mechanisms. Table 4 highlights variations in the types of expected outcomes with examples by shipyard, investment, and investment mechanism. Given the variation in expected outcomes, it is difficult to ascertain whether the Navy has a minimum return it expects to receive by providing financial support or whether any return at all is sufficient. For example, the Virginia-class submarine contract language states that "the Contractor shall be eligible to receive a special incentive based upon the Contractor and/or Major Subcontractor Newport News Shipbuilding investing in such projects that result in savings to the Government for the submarines under this contract and long term savings to the Government for the Virginia Class submarine program." As a result of this contract language, the contractors are not required to include return on investment calculations, calculate the net present value of savings on future submarines, or consider the share-line ratio to calculate actual savings to the government. Reviewing officials stated that even when contractors included return-on-investment calculations in their business cases, the officials did not review them because such calculations were not required by the contract language. 
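The calculations the contract language does not require are straightforward to define. As a hypothetical illustration of the share-line point above, with invented figures and function names (not data from our review): the government keeps only its share-line portion of a cost underrun, less any incentive fee it paid.

```python
# Hypothetical sketch of "actual savings to the government" on the current
# contract: the Navy's share-line portion of the gross cost savings, less
# the incentive fee it paid. All figures are invented for illustration.

def net_government_savings(gross_savings, government_share, incentive_paid):
    """Government's net savings on the current contract."""
    return round(gross_savings * government_share) - incentive_paid

# A project producing $10M in gross savings on a 50/50 share line, after a
# $4M incentive fee, nets the government only $1M on the current contract.
print(net_government_savings(10_000_000, 0.5, 4_000_000))   # 1000000

# Small expected savings against a large fee are plainly negative on the
# current contract, before counting any savings on future ships.
print(net_government_savings(100_000, 0.5, 4_000_000))      # -3950000
```

A review requirement of even this simple form would let reviewing officials distinguish projects that pay for themselves on the current block from those that depend entirely on future-ship savings.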
The contracting officer responsible for managing Virginia-class submarine CAPEX stated that this contract language is too vague concerning when to approve or disapprove a project based on estimated savings, and that if a similar incentive is used again, the contract should include criteria for when to approve or disapprove a project. To illustrate, the contracting officer stated that a contractor submitted a business case under Block II CAPEX for a project expected to cost $4 million, with $10,000 in expected savings on Block II submarines and additional savings accruing on future submarines beyond Block II. The contracting officer stated that the Navy did not approve the project because the expected savings on Block II were so low, but such a decision was difficult to support based on the contract language. Individual program offices and contracting officers also make decisions about which types of investments to pursue, without any policy from the Navy about the kinds of investments that are in its best interest. Most of the investments the Navy supported fall into the category of improving efficiency at the shipyards. Some of the more recent investments, however, could also be considered as maintaining or developing capabilities at the shipyards. It is unclear whether the Navy has determined that these investments are in fact in its best interest. For example, according to officials, the Virginia-class submarine Block II and Block III clauses do not prohibit approving maintenance projects as long as these projects generate cost savings. In 2009, the Navy paid a special contract-incentive fee to Electric Boat to refurbish equipment past its normal service life in order to prevent major failures that would result in injury or equipment damage and affect production schedules. 
In a similar manner, Newport News submitted a business case to receive a special contract-incentive fee to support repairs to its foundry, stating that near-term investment was necessary because most of the equipment is well past its average useful service life and at high risk of mechanical failure. The Navy did not approve this business case under the Block II special contract-incentive fee because Newport News was unable to demonstrate savings on the Block II submarines, a stipulation in the contract language. However, the Navy encouraged the shipyard to resubmit the proposal if it could demonstrate savings on future submarine construction. Such investments to maintain capabilities are likely to generate some cost savings and may better position the shipyards to increase submarine production rates, but some officials indicated that such investments should actually be contractor responsibilities. The Navy also lacks policy on how to determine an incentivized investment's effect on its indirect costs. Because the Navy incentivizes investments up front, it is unclear whether contractors should be able to recover indirect costs associated with these assets through depreciation and facilities capital cost of money. While the Navy did not allow the contractor to recover depreciation and facilities capital cost of money for investments supported with Hurricane Katrina funds, some agreements explicitly provide that the contractor can recover costs for incentivized facilities and equipment investments. However, Defense Contract Audit Agency officials questioned a facilities capital cost of money claim that one shipyard included in its indirect costs because the Navy provided an incentive to construct the facility. 
Nonetheless, officials concluded that the contractor could recover these costs from the Navy because the terms of the contract were unclear and neither the Federal Acquisition Regulation nor the Cost Accounting Standards addresses recovery of facilities capital cost of money for facilities receiving incentive support. Defense Contract Audit Agency officials stated that they believe it is unfair that contractors can recover facilities capital cost of money on incentivized facilities and that this issue needs to be reevaluated if the Navy continues to incentivize investments. In instances where the incentive agreement explicitly states that the contractor can recover these long-term costs, officials evaluating business cases stated that they do not always consider these costs when comparing the cost of the project with potential savings. Specifically, Navy officials stated that they did not consider the effect of depreciation when evaluating Virginia-class submarine Block II CAPEX projects. In the absence of Navy guidance, approaches for validating whether a project achieves expected outcomes, and for safeguarding the Navy's financial interest if it does not, vary by investment incentive. Some investment incentives require validation of anticipated savings, whereas others require only validation of project construction milestones. For example, officials described a lengthy review of savings validation associated with the first Virginia-class submarine CAPEX Block II project, but indicated that the process has evolved over time and other validations have been more straightforward. According to Navy officials managing the Virginia-class CAPEX incentive, the contract provides little guidance on how to validate outcomes, so program officials developed the current validation process after the contract was signed. 
However, the CVN 21 program office did not validate anticipated savings after investments were complete, but instead validated investments based on construction milestones. Because the Navy negotiated a lower target cost for the future carrier, Navy officials stated that it is not necessary to validate the savings associated with these projects. These officials added that it would be difficult to calculate an accurate baseline against which to compare labor hours with and without the new investments because the new carrier had never been constructed. In the absence of a Navy policy, program and contracting officials also negotiate various methods to safeguard the Navy's financial interest in the event that expected outcomes for the investment incentive are not achieved. The range of methods is shown in table 5. In addition to variation in the types of safeguards used across incentive mechanisms, the Navy has used the same investment mechanism—early release of contract retentions—for two different programs with different safeguarding mechanisms. The Navy modified the terms of the DDG 51 contract by negotiating changes to target price as a safeguard when it agreed to support the Ultra Hall investment through early release of contract retentions and payment of a special contract-incentive fee. In comparison, when the Navy agreed to an early release of contract retentions to support the facilities expansion project at NASSCO, program officials stated that the Navy did not renegotiate the terms of the T-AKE contract. In both instances, officials stated that the maturity of the DDG 51 and T-AKE programs was a factor in deciding to release contract retentions early; the Navy awarded Bath Iron Works the first DDG 51 destroyer contract in 1985 and NASSCO started construction of the T-AKE class in 2003. Over the past 10 years, the Navy has expanded its use of investment incentives to encourage shipyards to make investments that may reduce the costs of future ships. 
In a 2008 report to Congress, the Navy acknowledged a need to clarify its priorities and objectives for providing investment incentives to shipyards. However, the Navy has yet to do this, and the absence of policy leaves the overall goals and intended outcomes of this support unclear. Decisions about when a particular incentive should be chosen, what returns are acceptable across programs, and what types of investments the Navy should support are made on a case-by-case basis without guidance. It is also unclear whether contractors should be able to claim recovery for certain indirect costs related to assets supported by incentive mechanisms. Further, given the absence of policy, inconsistencies exist in the importance attached to validating outcomes and in how to safeguard the Navy's financial support in the event that the expected outcome is not achieved. We recommend that the Secretary of Defense direct the Secretary of the Navy to develop a policy that identifies the intended goals and objectives of investment incentives, criteria for using incentives, and methods for validating outcomes. The Department of Defense agreed with our recommendation to develop a policy that identifies the intended goals and objectives of investment incentives, criteria for using incentives, and methods for validating outcomes. The department stated that the Navy intends to include guidance for program managers and contracting officers in a Navy best-practices guidebook. The department's written comments can be found in appendix III of this report. The department also provided technical comments, which were incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Defense, and the Secretary of the Navy. The report also is available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To identify facilities and equipment investments over the last 10 years, we obtained and analyzed data on all capital investments over $1 million at all major, privately owned shipyards (General Dynamics Bath Iron Works, General Dynamics NASSCO, General Dynamics Electric Boat, Northrop Grumman Shipbuilding–Gulf Coast, and Northrop Grumman Shipbuilding–Newport News) and at two smaller, privately owned shipyards, Austal USA and Marinette Marine. We supplemented our analysis of the data by interviewing officials at each shipyard to obtain an understanding of the purpose of these investments. We then categorized the investments at major shipyards into three groups, and shipyard officials confirmed our categorization of the investments. In our analysis, we excluded investments that exclusively supported nuclear aircraft carrier and submarine refueling, modernization, and service life extension programs. We also excluded information-technology investments and annual operating capital. To assess the reliability of each shipyard's data, we interviewed knowledgeable shipyard officials about the data and confirmed that the data are subject to external audits. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed officials at each shipyard's Supervisor of Shipbuilding, Conversion, and Repair to understand investments over the past 10 years and how those investments may have affected each shipyard's work flow and processes. We also interviewed relevant Defense Contract Audit Agency officials at major private shipyards. 
To determine the role the Navy had in facilities and equipment investments at privately owned shipyards, we reviewed shipbuilding contracts, legislation making funds available for shipyards affected by Hurricane Katrina, and Deputy Assistant Secretary of the Navy for Ship Programs reports to Congress regarding capital-investment strategies at shipyards. To assist with identifying when the Navy has provided support for facilities and equipment investments, we held discussions with: the CVN 21 program office; DDG 51 program office; Joint High Speed Vessel program office; T-AKE program office; Virginia-class submarine program office; Program Executive Office, Ships; Supervisor of Shipbuilding, Conversion, and Repair (Bath, Groton, Gulf Coast, and Newport News); and Naval Sea Systems Command–Contracts. After identifying which mechanisms the Navy uses to provide support to shipyards for facilities and equipment investments and when these investments were used, we analyzed the data to determine any trends over the past 10 years. To supplement this analysis, we met with officials from the Office of the Deputy Assistant Secretary of the Navy–Ships, the Office of the Deputy Assistant Secretary of the Navy–Acquisition and Logistics Management, and the Office of the Secretary of Defense–Industrial Policy to understand how the Navy’s role in investment support at shipyards has evolved over the past 10 years. We also met with officials from General Dynamics Marine Systems and Northrop Grumman Shipbuilding to understand their corporate processes for when to make facilities and equipment investments and how the Navy’s support is considered during that process. 
To evaluate how the Navy ensures its role in facilities and equipment investments results in expected outcomes, we reviewed shipyard business-case analyses and accompanying documents for Navy-supported projects and analyzed approaches across programs to identify differences and whether attainment of expected benefits was formally validated. We supplemented this analysis with interviews of officials responsible for managing investment incentives, including the CVN 21 program office; T-AKE program office; Virginia-class submarine program office; Program Executive Office, Ships; and Supervisor of Shipbuilding, Conversion, and Repair (Bath, Groton, Gulf Coast, and Newport News). We conducted this performance audit from October 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Austal is an Australia-based company with a U.S. location in Mobile, Alabama. Austal USA is the Navy's prime contractor for the Joint High Speed Vessel and teamed with General Dynamics Bath Iron Works for construction of the Littoral Combat Ship. The Navy has contracted with Austal USA for three Joint High Speed Vessels with an option for seven more. The Navy has also contracted with General Dynamics Bath Iron Works for two Littoral Combat Ships built at the Austal USA shipyard. Austal is currently competing as the prime contractor for the next 10 Littoral Combat Ships. 
In June 2006, Congress enacted the Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Hurricane Recovery, 2006, which included funding for infrastructure improvements at Gulf Coast shipyards that had existing Navy shipbuilding contracts and were damaged by Hurricane Katrina. Following this legislation, the Assistant Secretary of the Navy for Research, Development and Acquisition issued a memorandum that outlined goals for competitively awarding the funding, provided general instructions for how contractors should develop business cases supporting funding requests, and established a panel to review contractor proposals for funding. The panel awarded Austal USA a contract supporting construction of the Modular Manufacturing Facility. Disbursement of funds from the Navy to Austal USA was based upon completion of predetermined construction milestones. Bath Iron Works operates facilities principally in Bath, Maine, with support facilities in Brunswick, Maine, and has been owned by General Dynamics since 1995. Bath Iron Works builds surface combatants, including the DDG 51 and DDG 1000. The Navy used early release of retentions to help support the Bath Iron Works investment in a Land Level Transfer Facility. The Navy supported Ultra Hall construction by modifying the terms of the DDG 51 contract and adding three incentive mechanisms. As part of the incentives, the Navy also negotiated a reduced maximum price for each DDG 51 ship. Corporate and shipyard officials stated that by releasing retentions early, the Navy helped Bath Iron Works and its corporate owner, General Dynamics, avoid negative cash flows during construction, a primary objective of the shipyard and corporate owner. The addition of a special contract-incentive fee gave Bath Iron Works an opportunity to earn additional profit by investing in the facility. By changing the incentive fee structure, the Navy also gave Bath Iron Works an incentive to achieve savings. 
Electric Boat operates two facilities, in Groton, Connecticut, and Quonset Point, Rhode Island, and has been part of General Dynamics since 1952. General Dynamics Electric Boat is the Navy's prime contractor for Virginia-class submarines. Through a teaming agreement, Electric Boat and Northrop Grumman Shipbuilding–Newport News work together to build the submarines. Each contractor is responsible for building designated sections and modules, and the contractors alternate final assembly, outfitting, and delivery. To date, the Navy has contracted to purchase submarines in three blocks: Block I includes four submarines, Block II includes six submarines, and Block III includes eight submarines. In 2000, the Navy agreed to accelerate depreciation on five investments over the course of the Virginia-class Block I contract. In 2004, Electric Boat initiated funding for long-term repairs of three graving docks. The Navy agreed to accelerate depreciation of the long-term repairs over 16 years rather than over the docks' entire useful life, expected to be more than 30 years. The Virginia-class submarine Block II and Block III contracts include special incentives to reward the contractor if it develops more efficient and cost-effective practices that contribute to the production of more affordable submarines. On both contracts, the contractor can claim a special incentive for investing in facilities and process-improvement projects. Since the submarines are built at both Electric Boat and Newport News, both contractors can claim the incentive under these contracts. Under the Block II contract, the contractor submits a business-case analysis to the Supervisor of Shipbuilding, Groton. Within 30 days after approval by the Supervisor of Shipbuilding and start of the project, the Navy pays the contractor a special incentive not to exceed 50 percent of the estimated investment cost. 
After the contractor successfully implements the project as defined in the business-case analysis, the Navy pays the contractor another special incentive not to exceed 50 percent of the original estimated investment cost. The sum of the two incentive payments cannot exceed 100 percent of the approved business-case analysis estimated investment cost. During the Block III contract negotiations, Newport News and Electric Boat proposed facilities and equipment investments, and savings from these investments were included in the target cost. For these investments, the contractor submits a business case to claim a special incentive fee tied to the first four submarines for the amount necessary to achieve the documented corporate minimum return on investment. For the last four submarines on the Block III contract, the process for claiming a special incentive fee mirrors the process under Block II. For these projects, the incentive amount can equal up to 100 percent of the approved business-case analysis estimated investment cost. Marinette Marine Corporation is located in Marinette, Wisconsin, and has been owned by Fincantieri since 2008. The Navy has contracted with Lockheed Martin for two Littoral Combat Ships built at the Marinette Marine shipyard. The Navy is currently holding a competition for the remaining Littoral Combat Ships. NASSCO operates in San Diego, California, and has been owned by General Dynamics since 1998. NASSCO builds auxiliary ships, including the T-AKE for Navy sealift operations. In recent history, NASSCO's work has been divided approximately as follows: 60 percent new construction for the Navy, 20 percent repair work, and 20 percent new commercial construction. NASSCO is the only major private shipyard to perform commercial work along with Navy shipbuilding. The Navy used early release of contract retentions to incentivize investments at NASSCO three times over the last 10 years. In 2001, the Navy released retentions early to support the acquisition of new cranes. 
In 2006 and 2008, the Navy released retentions early to support investments at NASSCO, including some support for investments that were part of NASSCO's facility expansion project. These investments included projects to modernize the preoutfitting facilities, such as expanding the M-Lane, improving stage of construction 4 activities, and constructing a new blast and paint facility. NASSCO officials said that by releasing retentions early, the Navy helped NASSCO maintain a positive cash flow while the shipyard made new investments. Northrop Grumman Shipbuilding–Gulf Coast operates in Pascagoula, Mississippi, and New Orleans, Louisiana, with other support facilities, and has been owned by Northrop Grumman since 2001. The shipyard builds surface combatants, amphibious assault ships, auxiliary ships, and Coast Guard patrol boats (cutters). Northrop Grumman Shipbuilding–Gulf Coast builds DDG 51 surface combatants and the hangar, rear Peripheral Vertical Launching System, and composite deckhouse for DDG 1000 surface combatants. It is also the prime contractor for the LPD 17 amphibious transport ship and the LHA 6 amphibious assault ship. In June 2006, Congress enacted the Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Hurricane Recovery, 2006, which included funding for infrastructure improvements at Gulf Coast shipyards that had existing Navy shipbuilding contracts and were damaged by Hurricane Katrina. Following this legislation, the Assistant Secretary of the Navy for Research, Development and Acquisition issued a memorandum that outlined goals for awarding the funding, provided general instructions for how contractors should develop business cases supporting funding requests, and established a panel to review contractor proposals for funding. Northrop Grumman Shipbuilding–Gulf Coast submitted several proposals for review, and the panel awarded this shipyard one contract supporting two separate investments, with an option for a third. 
The contract includes funding to support purchasing equipment for a panel line at the Pascagoula, Mississippi, shipyard; an option for funding to support equipment for a panel line at the Avondale shipyard in New Orleans, Louisiana; and special tooling for the composite manufacturing facility in Gulfport, Mississippi. Disbursement of funds from the Navy to Northrop Grumman Shipbuilding–Gulf Coast is based upon completion of predetermined construction milestones. To date, the Navy has expended 100 percent of funding on the contract for the Pascagoula panel line, 0 percent of funding on the contract for the Avondale panel line, and approximately 90 percent of funding on the contract for the composite manufacturing facility. Navy officials stated that funding for the Avondale panel line is contingent upon Northrop Grumman Shipbuilding–Gulf Coast demonstrating returns on the panel line in Pascagoula, Mississippi. Northrop Grumman Shipbuilding–Newport News, part of Northrop Grumman since 2001, operates in Newport News, Virginia, and is the Navy's prime contractor for aircraft carriers and refueling and complex overhauls. Newport News is currently constructing CVN 78, the lead ship of the new CVN 21 class of nuclear-powered aircraft carriers. Through a teaming agreement, Northrop Grumman Shipbuilding–Newport News also works with General Dynamics Electric Boat to build the Virginia-class submarines. Each contractor is responsible for building designated sections and modules, and the contractors alternate final assembly, outfitting, and delivery. To date, the Navy has contracted to purchase submarines in three blocks: Block I includes four submarines, Block II includes six submarines, and Block III includes eight submarines. In 2003, the Navy and Newport News signed a memorandum of agreement to accelerate depreciation of a new pier, known as Pier 3. Before construction of Pier 3, Newport News had one pier where it could perform work on aircraft carriers.
This pier was in use for almost 60 years and Newport News was planning to replace it in 2012. Due to a Navy scheduling conflict, Newport News was going to have two aircraft carriers that needed to be at this pier at the same time in fiscal year 2007. To address the scheduling conflict, the Navy agreed to accelerate depreciation of the new pier if Newport News accelerated its planned timeline to construct the pier. Under this agreement, Newport News is allowed to depreciate the pier over 7 years rather than over the estimated useful life of the pier, expected to be 40 years. Virginia-class submarine. The Virginia-class submarine Block II and Block III contracts include special incentives to reward the contractor if it develops more efficient and cost-effective practices that contribute to the production of more affordable submarines. On both contracts, the contractor can claim a special incentive for investing in facilities and process-improvement projects. Since the submarines are built at both Electric Boat and Newport News, both contractors can claim the incentive under these contracts. Under the Block II contract, the contractor submits a business-case analysis to the Supervisor of Shipbuilding, Groton. Within 30 days after approval by the Supervisor of Shipbuilding and start of the project, the Navy pays the contractor a special incentive not to exceed 50 percent of the estimated investment cost. After the contractor successfully implements the project as defined in the business-case analysis, the Navy pays the contractor another special incentive not to exceed 50 percent of the original estimated investment cost. The sum of the two incentive payments cannot exceed 100 percent of the approved business-case analysis estimated investment cost. During the Block III contract negotiations, Newport News and Electric Boat proposed facilities and equipment investments, and savings from these investments were included in the target cost of the contract. 
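The two-stage Block II payment mechanism described above amounts to two capped payments against the approved estimate. A minimal arithmetic sketch follows (the function name and dollar figure are invented for illustration and are not from the contract):

```python
def block_ii_incentive_payments(approved_estimate):
    """Sketch of the Block II special-incentive payment structure.

    Each of the two payments may not exceed 50 percent of the original
    estimated investment cost, and together they may not exceed
    100 percent of the approved business-case estimate.
    """
    # First payment: within 30 days of approval and project start.
    first = 0.5 * approved_estimate
    # Second payment: after successful implementation of the project.
    second = 0.5 * approved_estimate
    # The sum of the two payments is capped at the approved estimate.
    total = min(first + second, approved_estimate)
    return first, second, total

# Illustrative example: a $10 million approved investment.
first, second, total = block_ii_incentive_payments(10_000_000)
```

In this illustration, each payment is $5 million and the total incentive equals, but cannot exceed, the $10 million approved estimate.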
For these investments, the contractor submits a business case to claim a special incentive fee tied to the first four submarines for the amount necessary to achieve the documented corporate minimum return on investment. To claim a special incentive fee for the last four submarines on the Block III contract, the process mirrors that under Block II. For these projects, the incentive amount can equal up to 100 percent of the approved business-case analysis estimated investment cost. CVN 78 Construction-Preparation Contract. The CVN 78 construction-preparation contract includes a special contract incentive fee available to Newport News if it invests in 10 facilities identified during contract negotiations as investments that would contribute to reducing the construction cost of CVN 21 aircraft carriers. The special contract incentive fee for each facility is a portion of the total cost of the facility. The Navy pays the special incentive fee for each facility based upon Newport News's progress in constructing the facility. Newport News agreed to include savings from these facilities in the construction proposal. In addition to the contact named above, Karen Zuckerstein (Assistant Director), Matthew Butler, Kristine Hassinger, Michelle Liberatore, Aron Szapiro, and Molly Traci made major contributions to this report.
As fiscal constraints increasingly shape Navy shipbuilding plans, the pressure to increase efficiency mounts. Modernizing facilities and equipment at shipyards that build Navy ships can improve efficiency, ultimately reducing the cost of constructing ships. In response to a request from the House Appropriations Subcommittee on Defense, GAO (1) identified investments in facilities and equipment at privately owned shipyards over the last 10 years; (2) determined the Navy's role in financing facilities and equipment investments at these shipyards; and (3) evaluated how the Navy ensures that investments result in expected outcomes. To address these objectives, GAO analyzed shipyard investment data over the past 10 years; interviewed shipyard, corporate, and Navy officials; and reviewed contracts, investment business cases, and other Navy and contractor documents. Over the past 10 years, large shipyards that build Navy ships used public and corporate funds to invest over $1.9 billion in facilities and equipment to improve efficiency, develop new shipbuilding capabilities, and maintain existing capabilities. Examples of each category include the following: (1) Improving efficiency--General Dynamics Bath Iron Works built a new facility--the Ultra Hall--that improves efficiency by allowing shipbuilders to access work space more easily in a climate-controlled environment. (2) Developing capabilities--Northrop Grumman Shipbuilding-Newport News built a replacement pier that allowed shipbuilders to work on two aircraft carriers simultaneously to resolve a Navy scheduling conflict. (3) Maintaining capabilities--General Dynamics Electric Boat invested to repair docks in order to maintain the shipyard's ability to launch and repair submarines. Investments at two smaller shipyards, Austal USA and Marinette Marine shipyard, were primarily to maintain and develop new capabilities, as both are competing for new Navy contracts.
Over the last 10 years, the Navy expanded its use of investment incentives and has recently provided some form of investment support at all large shipyards. To incentivize facility and equipment investments, the Navy has (1) released money early from the reserve of contract funds normally held back to ensure ships are delivered according to specifications, (2) accelerated asset depreciation schedules, (3) tied a portion of the contractor's fee to investing in new facilities and equipment, and (4) adjusted the contract share line to give the contractor more of the savings if costs decrease. The Navy also manages funds appropriated by Congress for Hurricane Katrina relief at shipyards in the Gulf Coast. Outside of Hurricane Katrina funding, the Navy has not supported investments at the two smaller shipyards. Navy officials stated that the Navy has to negotiate investment incentives with large shipyards because limited competition and the instability of Navy work do not foster an environment in which shipyards will invest without incentives. Shipyard officials argued that instability in Navy shipbuilding plans makes it difficult to invest without guaranteed work from the Navy and that incentives are necessary to help meet the corporate minimum rates of return needed to justify an investment, especially given the large dollar amounts involved with some investments. The Navy lacks a policy to help ensure that it achieves its goals and objectives when providing facility and equipment investment incentives at private shipyards. Absent this policy, individual program offices and contracting officers decide what type of incentive to use, what return on investment to expect, and what kinds of investments to support. Without a policy, program offices and contracting officers use different methods to validate expected outcomes and safeguard the Navy's financial support.
GAO recommends that the Navy develop a policy that identifies the intended goals and objectives of investment incentives, criteria for using incentives, and methods for validating outcomes. The Department of Defense concurred with this recommendation.
During World War I, at a portion of American University and in other areas that became the Spring Valley neighborhood in Washington, D.C., the U.S. Army operated a large research facility to develop and test chemical weapons and explosives. After World War I, the majority of the site was returned to private ownership and was developed for residential and other uses. The site now includes, in addition to American University, about 1,200 private residences, Sibley Hospital, 27 embassy properties, and several commercial properties. In 1993, buried ordnance was discovered in Spring Valley, leading to its designation by the Department of Defense (Defense) as a formerly used defense site (FUDS) currently comprising 661 acres. FUDS are properties that were formerly owned, leased, possessed, or operated by Defense or its components, and are now owned by private parties or other governmental entities. These properties, located throughout the United States and its territories, may contain hazardous, toxic, and radioactive wastes; unexploded ordnance; and/or unsafe buildings. Such hazards can contribute to deaths and serious illness or pose a threat to the environment. According to the U.S. Army, Spring Valley is the only FUDS where chemical agents were tested in what became a well-established residential neighborhood at the heart of a large metropolitan area. To fund the environmental restoration program, the Superfund Amendments and Reauthorization Act of 1986 (SARA) established the Defense Environmental Restoration Account. During the 5 most recent fiscal years (1997-2001), annual program funding for FUDS cleanups decreased from about $255.9 million to about $231 million, with program funding estimated to decrease further to about $212.1 million by fiscal year 2003. By the end of fiscal year 2001, the Corps had identified 4,649 potential cleanup projects on 2,825 properties requiring environmental response actions.
Through fiscal year 2001 (the latest figure available), the Corps had spent about $53.4 million on cleanup activities at Spring Valley. The principal government entities involved at the Spring Valley site are carrying out their roles and responsibilities under the Defense Environmental Restoration Program (environmental restoration program). The program was established by SARA, which amended the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). Under the environmental restoration program, Defense is authorized to identify, investigate, and clean up environmental contamination at FUDS. Defense is required to consult with EPA in carrying out the environmental restoration program; EPA, in turn, has established written guidance under CERCLA for its activities at FUDS. Defense is also required to carry out activities under the environmental restoration program consistent with a statutory provision that addresses, among other things, participation by the affected states—in this case, the District of Columbia. Under the Corps’ program guidance, the District of Columbia has a role in defining the cleanup levels at the Spring Valley site. According to a District of Columbia Department of Health official, the department assesses the human health risks associated with any exposure to remaining hazards at Spring Valley. In carrying out their roles, these government entities have, over time, formed an active partnership to make important cleanup decisions. Under the partnership approach, each entity participates in the cleanup at Spring Valley. The Corps, with extensive experience in ordnance removal, is carrying out the physical cleanup. Other activities include the following: Identification of hazards: Defense consults with EPA and the District of Columbia on cleanup decisions at specified points in the environmental restoration process. 
EPA has provided assistance in identifying possible buried hazards by interpreting aerial photographs and has provided technical expertise with regard to the presence of hazards in soil, water, and air. Assessing human health risks: According to the District of Columbia's Department of Health, the department assesses the human health risks associated with any exposure to remaining hazards at Spring Valley. In addition, the District of Columbia, together with the Agency for Toxic Substances and Disease Registry (ATSDR), has been investigating whether residents have actually been exposed to arsenic in the soil. Selecting a cleanup level: The entities are currently finalizing decisions on a cleanup level for arsenic that will determine how much contamination can be left in the soil throughout the site without endangering human health and the environment. While the entities have not agreed on all cleanup decisions, officials of all three entities state that the partnership has been working effectively in the recent past. Continued progress at the site will depend, in part, on the effectiveness of this partnership over the duration of the cleanup. Although the U.S. Army twice concluded that it had not found any evidence of large-scale burials of hazards remaining at Spring Valley, an accidental discovery of buried ordnance and subsequent investigations have led to the discovery of additional munitions and chemical contamination. In March 1986, American University was preparing to begin the largest construction project in its history. At the request of American University, the U.S. Army reviewed historical documents and available aerial photographs of the site taken during the World War I era to determine whether chemical munitions might have been buried on campus. Based on the results of its review, in October 1986, the U.S. Army concluded that no further action was needed.
However, in January 1993, a utility contractor accidentally uncovered buried ordnance at another location in the Spring Valley site. The U.S. Army immediately responded and, by February 1993, had removed 141 pieces of ordnance, 43 of which were suspected chemical munitions (though most were destroyed before being tested). Immediately following this removal, the Corps began to investigate the site. To focus its investigation, the Corps identified 53 locations with the greatest potential for hazards. During the investigation, the Corps conducted subsurface (geophysical) soil surveys with metal detectors to identify buried ordnance and analyzed soil samples to identify chemical contamination. These surveys led the Corps to identify and remove one piece of ordnance containing a suspected chemical agent, 10 expended pieces of ordnance, an empty bomb nose cone, and several fragments of ordnance scrap. Concurrently with the Corps' investigation, another piece of ordnance was found by a builder during construction activities, and two pieces of ordnance were anonymously left for the Corps to find. Based on the results of soil sampling and the ensuing risk assessment, the Corps concluded that no remedial action was needed. Following this investigation, in June 1995, the U.S. Army determined that no further action was required at the Spring Valley site, except for an area that contained concrete shell pits, or bunkers, referred to as the Spaulding/Captain Rankin Area, which was then still under investigation. Subsequent sampling and a risk assessment indicated that no remedial action was necessary, and in June 1996, the Corps recommended that no further action be taken at this area as well. In 1997, the District of Columbia raised a number of concerns about how the Corps had completed its investigation.
In response, the Corps reviewed its work at the site and concluded that it had incorrectly located one of the potentially hazardous locations it had previously investigated; the location was actually on a property owned by the Republic of Korea (South Korea) on Glenbrook Road. In February 1998, the Corps surveyed the soil on the South Korean property and identified two potential burial pits. By March 2000, the Corps had completed the removal of 288 pieces of ordnance, 14 of which were chemical munitions; 175 glass bottles, 77 of which contained various chemicals, including mustard and lewisite; and 39 cylinders and 9 metal drums. Subsequent soil sampling conducted by EPA led the Corps to remove arsenic-contaminated soil from these properties. By May 2001, the Corps had removed about 4,560 cubic yards of arsenic-contaminated soil from the South Korean property and the adjacent property. As of April 2002, the Corps had not yet removed contaminated soil from the third property, which is the American University President's residence. After the discovery of hazards on the Glenbrook Road properties, in January 2000, at the request of the District of Columbia, the Corps expanded its arsenic investigation to include about 60 nearby residences and the southern portion of the American University campus. Sampling at these locations indicated that the Corps needed to remove arsenic-contaminated soil from the American University Child Development Center and other locations on the American University campus, as well as 11 residential properties. As of April 2002, the Corps had removed about 1,063 cubic yards of contaminated soil from American University. At a public meeting in February 2001, community members urged testing the entire Spring Valley neighborhood for arsenic. The Corps began to test all 1,483 properties within the Spring Valley site for arsenic in May 2001.
As of April 2002, the Corps had identified about 160 properties that will require some degree of cleanup, with 7 identified for priority removals of arsenic-contaminated soil because they present relatively higher risks of exposure. Recently, the District of Columbia’s Department of Health has urged the Corps to consider including nine additional properties on the list. In addition, the Corps has sampled for additional chemicals at selected locations as a result of information it has about what type of research activities might have occurred at the locations in the past. The results of the sampling are currently under review, but preliminary results have not identified any additional chemicals of concern, according to the Corps. In May 2001, at the urging of the District of Columbia and EPA, the Corps began to investigate an additional burial pit on the property line between the South Korean property and the adjoining residence on Glenbrook Road. The Corps is continuing to investigate the burial pit, and as of January 2002, had found 379 pieces of ordnance, 11 of which contained the chemical warfare agents mustard and lewisite; fragments of another 8 pieces of ordnance; 60 glass bottles and 3 cylinders, 24 of which contained mustard, lewisite, and acids; and 5 metal drums that showed signs of leakage. Concurrently with the efforts to expand the arsenic investigation, the Corps is planning to expand its efforts to survey properties for buried ordnance. The Corps plans to begin excavating two properties on Sedgwick Street where surveys indicate the presence of buried metallic objects that could possibly be pieces of ordnance. In addition, the Corps, in conjunction with EPA and the District of Columbia, is developing a list of properties to be geophysically surveyed for potential buried ordnance. 
Site-specific information, such as the results of a review performed by EPA’s Environmental Photographic Interpretation Center, will be factored into determining priorities for surveying these additional sites. As of April 2002, the Corps had estimated that a total of 200 properties would be surveyed for ordnance. The government entities recognize that the extent to which hazards remain may never be known with certainty because of the technical limitations associated with sampling and geophysically surveying soil. At Spring Valley, cleanup decisions depend on the immediacy of the safety and human health risks presented. Throughout the cleanup of the site, identification and removal of buried ordnance have been and continue to be the government entities’ top priorities in terms of human health concerns and cleanup decisions. The partners have agreed to remove buried ordnance as soon as possible after its discovery. Accordingly, since early in the Spring Valley cleanup effort, removal of buried ordnance has taken priority over other tasks. The partners also attempt to set priorities for cleaning up properties containing elevated levels of chemicals or metals in soil on the basis of the risk the hazards pose. Although many chemical agents were tested at Spring Valley during World War I, of those contaminants now present at elevated levels, arsenic is deemed to pose the greatest risk to human health and therefore is the contaminant of most concern to the partners. During its remedial investigation of the site from 1993 to 1995, the Corps used EPA’s criteria to assess the health risks associated with these hazards to determine whether further sampling or soil removal was necessary. This assessment found no elevated health risk requiring remedial action. 
Arsenic was not identified as a contaminant of potential concern for the risk assessment, since, according to the Corps, the sampling results of the arsenic level in the soil were not significantly different from naturally occurring levels. EPA noted that it was involved in the oversight of the cleanup and did not object to the decision made at the time. However, since early 1999, with the additional discovery of buried ordnance and elevated levels of arsenic-contaminated soil at the South Korean property, the arsenic levels in the soil have become the primary focus of soil cleanup efforts. Arsenic exposure at certain doses in drinking water has been generally linked to cancers and other adverse health conditions. Based on scientific studies, the District of Columbia has identified lung cancer, bladder cancer, and skin cancer as effects associated with the long-term ingestion of arsenic. However, the precise extent to which arsenic is present and residents are exposed through ingestion, inhalation, or external contact at Spring Valley is unknown, and recent and ongoing efforts are directed at providing this information. Soil sampling: Through soil sampling, the partners have attempted to detect levels of arsenic in the soil to assist in ascertaining health risks and to set priorities for cleanup. Recent sampling results have registered elevated levels of arsenic in the soil at certain residences. Consequently, the District of Columbia’s Department of Health has requested that additional properties be added to the priority removal list. Exposure testing: After the Corps confirmed elevated arsenic soil levels at American University’s Child Development Center, at the request of the District of Columbia, ATSDR conducted an exposure study to determine the extent of arsenic exposure in children and employees at the site. After testing hair samples, ATSDR concluded that the children and employees had had no significant exposure to arsenic. 
At the request of the District of Columbia, ATSDR is conducting another exposure study (biomonitoring), in which it is studying the level of arsenic present in biological samples from residents of Spring Valley properties with the highest levels of arsenic in the soil. The individual results from the biological samples collected during the exposure investigation were mailed to the residents and were reviewed and discussed by the Mayor's Scientific Advisory Panel. During the Panel's recent meeting, several members noted that this study was a small-sample screening investigation, not a full scientific human research project or epidemiological study. The Panel discussed the possibility of ATSDR's continuing a screening investigation during the summer months. Descriptive epidemiological studies: The District of Columbia has also conducted descriptive epidemiological studies in an attempt to compare the arsenic-related health effects in Spring Valley with those in two control groups as well as with the nationwide incidence and mortality rates for certain cancers. The studies examined bladder, skin, lung, liver, and kidney cancers. The number of cases of liver and kidney cancers at Spring Valley, however, was too small to permit a meaningful statistical analysis. For bladder, skin, and lung cancers, the District of Columbia observed no excess of cancer incidence or mortality in Spring Valley. Residents have raised concerns about the extent of the population studied and the completeness of the data used for the exposure tests and epidemiological studies. For example, some residents have voiced concerns that the full suite of hazards—not just arsenic—present at Spring Valley, even at trace levels, has not been factored into exposure and epidemiological studies. The District of Columbia and the Corps have indicated that mustard agent was found in containers in the pit discovered at Glenbrook Road in May 2001.
The District of Columbia’s Department of Health does not plan to study exposure to mustard agent, however, because it did not identify a pathway of exposure to mustard agent that could produce a dose resulting in adverse human health effects. The District of Columbia’s Department of Health has told Spring Valley residents that, if necessary, it will expand the investigation to hazards other than arsenic, if the hazard is found at levels of concern in Spring Valley. Under the environmental restoration program, the Secretary of Defense is required to report annually to the Congress on the progress the department has made in carrying out environmental restoration activities at military installations and FUDS. From fiscal years 1997 through 2001 (the most recent report available), the total estimated cost to clean up Spring Valley reported by Defense increased by about six-fold, from about $21 million to about $124.1 million. In response to our request, the U.S. Army provided us with a more up-to-date estimate. As of April 2002, the Corps had slightly revised its estimated cost to about $125.1 million, as shown in figure 1. Costs have increased principally because the Corps needed to increase the scope of the remaining cleanup, as more information about the site became known. For example, from fiscal year 2000 to fiscal year 2001, the Corps doubled its estimate of the cost to complete the cleanup to include the cost of expanding the scope of planned investigation activities. In fiscal year 2000, the Corps estimated that completing the cleanup would cost about $35.8 million. In fiscal year 2001, the Corps raised its estimate to about $72.9 million to include the cost of sampling the entire Spring Valley site for arsenic-contaminated soil, geophysically surveying selected properties for buried ordnance, and completing additional work needed to remove buried hazards at one location. 
As of April 2002, the Corps slightly lowered its fiscal year 2001 estimate to about $71.7 million, as the preliminary results of the sitewide soil sampling yielded additional information about the extent of arsenic contamination. The Corps’ latest estimate of the cost to complete the cleanup depends on assumptions the Corps has made about how many properties will require the removal of arsenic-contaminated soil and how many properties will need to be surveyed and excavated to remove possible buried hazards. For example, as of April 2002, the Corps estimated that, in addition to the ordnance and soil removal activities taking place at the South Korean property and two adjacent properties, arsenic-contaminated soil will need to be removed from another 161 properties and 202 properties will need to be excavated for possible buried ordnance. Despite the large increases in the scope and cost of the remaining cleanup work, in April 2002, the Corps shortened its estimate of the time to complete the cleanup by 5 years, projecting completion in fiscal year 2007. Prior to fiscal year 2000, Defense’s annual reports to the Congress did not provide any estimate of when the Corps planned to complete cleanup activities at Spring Valley. In Defense’s fiscal year 2000 annual report to the Congress, the Corps estimated that it would complete such activities by the end of fiscal year 2012. The Corps plans to meet the shortened time frame by applying considerably more funding to the site in the near term. However, we question whether the Corps will be able to achieve its planned completion even if there are no further changes to the scope of work. As part of its April 2002 revised estimate, the Corps acknowledged that meeting the schedule would depend on the FUDS budget and the U.S. Army’s ability to apply the specified funding to the Spring Valley site. In order to continue to meet these needs, the U.S. 
Army may have to reprogram funds from possible use at other sites nationwide in each of the remaining years of the cleanup. Furthermore, in fiscal year 2002, the Corps planned to allocate to Spring Valley about 8 percent of the national budget for FUDS—which has declined in recent years—and about 86 percent of the FUDS budget for the Baltimore District, which includes funding for FUDS in six states and the District of Columbia. According to the U.S. Army, the provision of funds for the Spring Valley cleanup is already adversely affecting the availability of funding and progress at other sites. As more information becomes available about the hazards at the site, the Corps will develop a clearer sense of how reliable its assumptions are on the extent of the hazards present and the cost of removing them. The Corps’ experience with excavating buried hazards at two Glenbrook Road properties illustrates the difficulty of estimating the cost of removing buried hazards. In fiscal year 2002, the Corps determined that completing the removal would cost about $6 million more than anticipated at the end of fiscal year 2001. Furthermore, the Corps assumed that arsenic would remain the focus of its efforts to reduce the risks of exposure to contaminated soil, and based its cost estimate on the work needed to meet a proposed cleanup level for arsenic; as of April 2002, the partners had not finalized this level. As part of its expanded soil sampling efforts, the Corps could identify the presence of yet other chemicals and expand the scope of soil removal. Until more complete information is known about the actual types and extent of the hazards present throughout the site and the actual cost of removing them, the reliability of the Corps’ estimate of the cost and schedule to complete the cleanup remains uncertain. 
We found data on 58 properties in the District of Columbia where hazards resulting from federal activities have been found, using Defense data as of March 2002, EPA data as of April 2002, and District of Columbia data as of January 2002. These properties included 8 active Defense installations and 30 FUDS. For an active Defense installation, the host military branch of the installation is responsible for the cleanup, while the Corps is responsible for the cleanup of all FUDS. We also found six properties involving other federal agencies, including the Department of Agriculture and the National Park Service. Hazards at these sites include, among others, ordnance and explosive waste; hazardous, toxic, and radioactive waste; polychlorinated biphenyls (PCB); petroleum by-products; solvents; and heavy metals contamination. Finally, we found data on 30 federal properties (including 16 of the properties already identified) in the District of Columbia on which remediation of leaking underground storage tanks was in process, as of January 2002. Hazards at these sites include, among others, diesel fuel, gasoline, heating oil, kerosene, and waste oil. - - - - - In conclusion, Madam Chairwoman, a number of interdependent uncertainties continue to affect the progress of the Spring Valley cleanup. Until some of the existing uncertainties are resolved, the government entities will not be able to provide the community with definitive answers on any remaining health risks or the cost and duration of the cleanup. This concludes my prepared statement. I will be happy to respond to any questions from you or other Members of the Subcommittee.
During World War I, the U.S. Army operated a large research facility to develop and test chemical weapons and explosives in the area that became the Spring Valley neighborhood in Washington, D.C. Buried ordnance, discovered there in 1993, led to the designation by the Department of Defense (DOD) of 61 acres as a formerly used defense site. Through fiscal year 2001, DOD had spent over $50 million to identify and remove hazards at the site. The government entities involved have identified and removed a large number of hazards, but the number remaining is unknown. The health risks influencing cleanup activities at Spring Valley are the possibility of injury or death from exploding or leaking ordnance and containers of chemical warfare agents and potential long-term health problems from exposure to arsenic-contaminated soil. As of April 2002, the U.S. Army estimated that the remaining cleanup activities would cost about $71.7 million and take 5 years, but the reliability of these estimates is uncertain. This testimony summarizes a June 2002 report (see GAO-02-556).
Productivity is defined as the efficiency with which inputs are used to produce outputs. It is measured as the ratio of outputs to inputs. Productivity and cost are inversely related—as productivity increases, average costs decrease. Consequently, information about productivity can inform budget debates as a factor that explains the level of, or changes in, the cost of carrying out different types of activities. Improvements in productivity either allow more of an activity to be carried out at the same cost or the same level of activity to be carried out at a lower cost. IRS currently relies on output-to-input ratios, such as cases closed per full-time equivalent (FTE), and on productivity indexes to measure productivity. A productivity change is measured as an index that compares productivity in a given year to productivity in a base year. Measuring productivity trends requires choosing both output and input measures and the methods for calculating productivity indexes. In the past we have reported on declining enforcement trends, finding in 2002 that there were large and pervasive declines in six of eight major compliance and collection programs we reviewed. In addition to reporting these declines, we reported on the large and growing gap between collection workload and collection work completed and the resultant increase in the number of cases in which IRS has deferred collection action on delinquent accounts. In 2003, we reported on the declining percentage of individual income tax returns that IRS was able to examine or audit each year, with this rate falling from 0.92 percent to 0.57 percent between 1993 and 2002. Since 2000, the audit rate has increased slightly but has not returned to previous levels. IRS conducts two types of audits: field exams, which involve complex tax issues and usually face-to-face contact with the taxpayer, and correspondence exams, which cover simpler issues and are done through the mail. 
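The basic output-to-input ratio and base-year index described above can be sketched as follows. This is a minimal illustration; the case counts and FTE figures are hypothetical, not actual IRS data.

```python
# Minimal sketch of an output-to-input productivity ratio and a
# productivity index relative to a base year (hypothetical data).

def productivity_ratio(output, input_):
    """Output per unit of input, e.g., cases closed per FTE."""
    return output / input_

def productivity_index(ratios, base_year):
    """Each year's productivity expressed relative to a base year (= 100)."""
    base = ratios[base_year]
    return {year: 100 * r / base for year, r in ratios.items()}

# Hypothetical cases closed and FTEs by fiscal year.
cases = {1997: 1000, 1998: 900, 1999: 850}
ftes = {1997: 50, 1998: 50, 1999: 47}

ratios = {y: productivity_ratio(cases[y], ftes[y]) for y in cases}
index = productivity_index(ratios, base_year=1997)
# 1997 is 100 by construction; 1998 falls to 90 (18 vs. 20 cases per FTE).
```

Comparing the index values across years gives the percentage increase or decrease in productivity relative to the base year.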
We also reported on enforcement productivity measured by cases closed per FTE employee, finding that IRS’s telephone and field collection productivity declined by about 25 percent from 1996 through 2001 and productivity in IRS’s three exam programs—individual, corporate, and other audit—declined by 31 to 48 percent. In January 2004 we reported on the extent to which IRS’s Small Business and Self-Employed (SB/SE) division followed steps consistent with both GAO guidance and the experience of private sector and government organizations when planning its enforcement process improvement projects. We reported on how the use of a framework would increase the likelihood that projects target the right processes for improvement and lead to the most fruitful improvements. In that report, we also reported that more complete productivity data—input and output measures adjusted for the complexity and quality of cases worked—would give SB/SE managers a more informed basis for decisions on how to identify processes that need improvement, improve processes, and assess the success of process improvement efforts. This report elaborates on that recommendation, providing more information about the challenges of obtaining complete productivity data. Improving productivity by changing processes is a strategy SB/SE is using to address these declining trends. However, the data available to SB/SE managers to assess the productivity of their enforcement activities, identify processes that need improvement, and assess the success of their process improvement efforts are only partially adjusted for complexity and quality of cases worked. This problem of adjusting for quality and complexity is not unique to SB/SE process improvement projects—the data available to process improvement project managers are the same data used throughout SB/SE to measure productivity and otherwise manage enforcement operations. 
Because IRS provides services, such as providing information to taxpayers and enforcing the tax laws, that are intangible and complex, measuring output—and therefore productivity—is challenging. Like other providers of intangible and complex services, IRS has a choice of measuring activities or the results of its services. Generally, information about results is preferred, but measuring results is often difficult. In the absence of direct measures of results, activities that are closely related to the results of the service can be used as proxies. Measuring productivity in services is difficult. Unlike manufacturing, which lends itself to objective measurement because output can be measured in terms of units produced, services, which involve changes in the condition of people receiving the service, often have intangible characteristics. Thus, the output of an assembly line is easier to measure than the output of a teacher, doctor, or lawyer. Services may also be complex bundles of individual services, making it difficult to specify the different elements of the service. For example, financial services provide a range of individual services, such as financial advice, accounts management and processing, and facilitating financial transactions. IRS provides a service. IRS’s mission, to help taxpayers understand and meet their tax responsibilities and to apply the tax law with integrity and fairness, requires IRS to provide a variety of services ranging from collecting taxes to taxpayer education. IRS, like other service providers, could measure output in terms of its results—its impact on taxpayers—or in terms of activities. The results of IRS’s service are the impacts on the condition or behavior of taxpayers. These taxpayer conditions or behaviors include their compliance with the tax laws, their compliance burden (the time and money cost of complying with tax laws), and their perception of how fairly taxpayers are treated. 
IRS’s activities are what IRS does to achieve those results. These activities include phone calls answered, notices sent to taxpayers, and exams conducted. Generally, information about results is preferred, but measuring such results is often difficult. In the case of the public sector, this preference is reflected in GPRA, which requires that federal agencies measure performance, whenever possible, in terms of results or outcomes for people receiving the agencies’ services. However, results such as compliance and fairness have intangible characteristics that are difficult to measure. In addition, results are produced in complicated and interrelated ways. For example, a transaction or activity may affect a number of results: IRS’s exams may affect taxpayers’ compliance, compliance burden, and perceptions of the fairness of the tax system. In addition, a result may be influenced by a number of transactions or activities: A taxpayer’s compliance may be influenced by all IRS exams (through their effect on the probability of an exam) as well as by other IRS activities such as taxpayer assistance services. IRS’s activities are easier to measure than results but still present challenges. Activities are easier to measure because they are often service transactions such as exams, levies issued, or calls answered that can be easily counted. However, unlike measures of results, more informative measurement of activities requires that they be adjusted for quality and complexity, as we noted in our report on IRS’s enforcement and improvement projects. A productivity measure based on activities such as cases closed per FTE may be misleading if such adjustments are not made. For example, an increase in exam cases closed per FTE would not indicate an increase in true productivity if the increase occurred because FTEs were shifted to less complex cases or the examiner allowed the quality of the case review to decline to close cases more quickly. 
Activities-based productivity measures can provide IRS with useful information on the efficiency of IRS operations. Measuring output, and therefore productivity, in terms of activities provides IRS with measures of how efficiently it is using resources to perform specific functions or transactions. However, activities do not constitute—and should not be mistaken for—measures of IRS’s productivity in terms of ultimate results. While the productivity measures we have examined are based on activities, IRS has efforts under way to measure results such as compliance and compliance burden. Recently, we reported on IRS’s National Research Program (NRP) to measure voluntary compliance and efforts to measure compliance burden. As we mentioned previously, measuring these results is difficult. For some results, such as compliance, measurement is also costly and intrusive because taxpayers must be contacted and questioned in detail. Despite these difficulties, IRS can improve its productivity measurement by continuing its efforts to get measures of results. These efforts would give Congress and the general public a better idea of what is being achieved by the resources invested in IRS. In the absence of direct measures of results, activities that are closely related to the results of the service are used as proxies. The value of these proxies depends on the extent to which they are correlated with results. By carefully choosing these measures, IRS may gain some information about the effect of its activities on ultimate results. Because activities may affect a number of results and a single result may be affected by a number of activities, a single activity likely will not be a sufficient proxy for the results of the service. Therefore, a variety of activities would likely be necessary as proxies for the results of the service. 
Both types of output measures, those that reflect the results of IRS's service and those that use activities to measure internal efficiency, should be accurate and consistent over time. In addition, both output measures should be reliably linked to inputs. Linking the results of IRS's service to inputs may be difficult because of outside factors that may also affect measured results. For example, an increase in compliance could result both from IRS actions such as exams and from changes in tax laws. Another challenge is that IRS currently has difficulties linking inputs to activities, as we noted in a previous report on IRS's lack of a cost accounting system; IRS only recently implemented such a system and has not yet determined the full range of its cost information needs. Table 1 summarizes some of the key differences between activities and results measures. Table 1 also indicates some general criteria that apply to both types of measures. Because inputs are more easily measured and identifiable than outputs, measuring them is more straightforward. IRS, as a government agency, may more often be able to use labor costs or hours as a single input in its productivity measures because it relies heavily on labor. However, it may be particularly important for IRS to use a multifactor measure that includes capital along with labor during periods of modernization that involve increased or high levels of capital investment. As with outputs, inputs should be measured accurately and consistently over time. Measuring inputs consistently over time may require adjusting for changes in the quality of labor, which has been done using proxies such as education level or years of experience. Also, as mentioned previously, inputs should be reliably linked to outputs. The appropriate method for calculating productivity depends on the purpose for which the productivity measure is being calculated. 
The alternative methods that can be used for calculating productivity range from computing single ratios—exams closed per FTE—to using complex statistical methods to form composite indexes that combine multiple indicators of outputs and inputs. While single ratios may be adequate for certain purposes, composite indexes based on statistical methods may be more useful because they provide information about the sources of productivity change. Comparing the ratios of outputs to inputs at different points in time defines a productivity index that measures the percentage increase or decrease in productivity over time. The ratios that form the index may be single, comparing a single output to a single input, or composite, comparing multiple outputs and inputs. Single ratios may be useful for evaluating the efficiency of a single noncomplex activity. Composite indexes can measure the productivity of more complicated activities, controlling for complexity and quality. Composite indexes can also be used to measure productivity of resources across an entire organization, where many different activities are being performed. One method of producing composite indexes is to use weights to combine such disparate activities as telephone calls answered and exams closed. One common weighting method, used by the Bureau of Labor Statistics (BLS), is a labor weight. Weighting outputs by their share of labor in a baseline period controls for how resources are allocated between different types of outputs. If the productivity of two activities is unchanged but resources are reallocated between the activities, the composite measure of productivity would change unless these weights are employed. For example, if IRS reallocates exam resources so that it does more simple exams and fewer complex exams, the number of total exams might increase. Consequently, a single productivity ratio comparing total exams to inputs would show an increase. Labor weighting deals with this issue. 
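The labor-weighting approach just described can be sketched as follows. In this hypothetical illustration, resources shift from complex field exams to simple correspondence exams while per-type productivity is unchanged; the unweighted ratio shows a spurious gain, while the base-year labor-weighted index correctly shows no change. All figures are invented for the example.

```python
# Sketch of a base-year labor-weighted composite productivity index
# (hypothetical exam counts and FTE allocations, not IRS data).

def labor_weighted_productivity(outputs_base, outputs_t, labor_base, labor_t):
    """Composite output index (base-year labor weights) over an input index."""
    total_labor_base = sum(labor_base.values())
    weights = {k: labor_base[k] / total_labor_base for k in labor_base}
    output_index = sum(weights[k] * outputs_t[k] / outputs_base[k]
                       for k in outputs_base)
    input_index = sum(labor_t.values()) / total_labor_base
    return output_index / input_index

# Base year: 100 complex field exams (80 FTEs), 200 simple
# correspondence exams (20 FTEs).
outputs_0 = {"field": 100, "corr": 200}
labor_0 = {"field": 80, "corr": 20}

# Later year: FTEs shifted toward simple exams; total exams rise even
# though exams per FTE within each type are unchanged.
outputs_1 = {"field": 50, "corr": 600}
labor_1 = {"field": 40, "corr": 60}

weighted = labor_weighted_productivity(outputs_0, outputs_1, labor_0, labor_1)
unweighted = (sum(outputs_1.values()) / sum(labor_1.values())) / \
             (sum(outputs_0.values()) / sum(labor_0.values()))
# weighted = 1.0 (no true change); unweighted is roughly 2.17,
# an apparent gain that reflects only the shift to easier exams.
```

The weighted index isolates changes in the efficiency of performing each type of exam from changes in the mix of exams performed.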
The weights allow any gains from resource reallocation to be distinguished from gains in the productivity of the underlying activities. When types of activities can be distinguished by their quality or complexity, labor weighting can also be used to control for quality and complexity differences when resources are shifted between types of outputs. More complicated statistical methods can be used for calculating composite indexes that allow for greater flexibility in how weights are chosen to combine different outputs and for a wider range of output measures that include both qualitative and quantitative outputs. Data Envelopment Analysis (DEA), which has been widely used to measure the productivity of private industries and public sector services, is an example of such methods. DEA estimates an efficiency score for each producing unit, such as the firms in an industry or the schools in a school district, or, for IRS, the separately managed areas and territories composing its business units. DEA estimates the relative efficiency of each producing unit by identifying those units with the best practice—those making the most efficient use of inputs, under current technology, to produce outputs—and measuring how far other units are from this best-practice combination of inputs used to produce outputs. DEA estimates provide managers with information on how efficient they are relative to other units and the costs of making individual units more efficient. These efficiency scores are used to form a composite productivity index called a Malmquist index. An advantage of the Malmquist index is that IRS managers can restrict the weights to adjust for managerial or congressional preferences to investigate the effect on productivity of a shift, for example, from an organization that emphasizes enforcement to one that emphasizes service. 
IRS can also include many different types of outputs and inputs, control for complexity and quality, and isolate the effects of certain historical changes, such as the IRS Restructuring and Reform Act of 1998 (RRA98). Another advantage of the Malmquist index is that productivity changes can be separated into their components, such as efficiency and technology changes. In this context, efficiency can be measured holding technology constant, and technology can be measured holding efficiency constant. Holding technology constant, IRS might improve productivity by improving the management of its existing resources. On the other hand, technology changes might improve productivity even if the management of resources has not changed. Thus, the productivity change of a given IRS unit is determined by both changes in its efficiency relative to the current best-practice IRS units and changes in the best practices or technology. Currently available IRS data can be used to produce productivity indexes that control for complexity and quality. The examples that follow focus on productivity indexes that use exams closed as outputs and FTEs as inputs. The data on examinations cover individual returns across IRS and IRS's Large and Mid-Size Business (LMSB) division. For both individuals and LMSB, the complexity and quality of exams can vary over time. For example, the proportion of exams that are correspondence versus field, business versus nonbusiness, and earned income credit (EIC) versus non-EIC can vary over time. As already discussed, failing to take account of such variation can give a misleading picture of productivity change. While these examples do not encompass all the methods, data, and adjustments that may be used, they illustrate the benefits of the additional analysis that IRS can perform using current data. In addition, as we pointed out in our 2004 report, IRS can improve its productivity measurement by investing in better data, taking into account the costs and benefits of doing so. 
These better data include measures of complexity, improved measures of quality, and additional measures of output. Figures 1 through 4 illustrate, using currently available data between fiscal years 1997 and 2004, the difference between weighted indexes that make an adjustment for complexity and unweighted indexes that make no adjustments. In the illustrations, a labor-weighted composite index, which can control for complexity, is contrasted with a single unweighted index to show how the simpler method may be misleading. (See app. I for a fuller description of the labor-weighted index.) In each case, complexity is proxied by type of exam. Although the data were limited (for example, the measure of complexity was crude), the illustrations show that making the adjustments that are possible provides a different picture of productivity than would otherwise be available. The advantage of weighted indexes is that they allow changes in the mix of exams to be separated from changes in the productivity of performing those exams. In the examples that follow, an unweighted measure could be picking up two effects. One effect is the change in the number of exams that an auditor can complete if the complexity or quality of the exam changes. The second effect is the change in the number of exams an auditor can complete if the time an auditor requires to complete an exam changes, holding the quality and complexity of exams constant. By isolating the latter effect, the weighted index more closely measures productivity, or the efficiency with which the auditor is working the exams. For individual exams, the comparison of productivity indexes shows that the unweighted index understates the decline in productivity. As figure 1 shows, between fiscal years 1997 and 2001, the unweighted productivity index declined by 32 percent while the weighted index declined by 53 percent. The difference is due largely to the increase in EIC exams during the period. 
Over the period between fiscal years 1997 and 2001, exams were declining. However, the mix of exams was changing, with increases in the number of EIC exams. EIC exams are disproportionately correspondence exams, and IRS can do these exams faster than field exams. IRS shifted to “easier” exams, and that shift caused the unweighted index to give an incomplete picture of productivity. The shift masked the larger productivity decline shown by the weighted index. Figure 2 provides additional evidence to support the conclusion that the shift to more EIC exams is the reason for the difference in productivity shown in figure 1. Between fiscal years 1997 and 2001, the weighted and unweighted indexes track each other very closely when the EIC exams are removed. Both show a decline in productivity of about 50 percent over this period. The available data were not sufficient to control for other factors that may have influenced exam productivity. For example, RRA98 imposed additional requirements on IRS’s auditors, such as certifications that they had verified that past taxes were due. Figure 3 compares unweighted and weighted productivity indexes for exams done in LMSB division. As figure 3 shows, between fiscal years 2002 and 2004, the unweighted productivity index increased by 4 percent, while the weighted index increased by 16 percent. This difference appears largely due to the individual exams and small corporate exams done in LMSB. Over the period, total exams were declining but the mix of exams was changing. LMSB was shifting away from less labor-intensive individual returns and small corporation returns to more complex business industry and coordinated industry return exams. This shift caused the unweighted index to give an incomplete picture of productivity. Here, the shift masked the larger productivity increase as shown by the weighted index. 
Figure 4 provides additional evidence to support the conclusion that the shift away from individual and small corporate exams is the reason for the difference in productivity shown in figure 3. Between fiscal years 2002 and 2004, when individual and corporate exams are excluded, the two indexes track more closely, with the unweighted index increasing by 15 percent and the weighted index by 17 percent. There is evidence that adjusting for quality would show that LMSB’s productivity increased more than is apparent in figures 3 and 4 for the years 2002 to 2004. Average quality scores available for selected types of LMSB exams show quality increasing over the 2-year period. Adjusting for this increase in quality, in addition to adjusting for complexity, would show a productivity increase for these types of exams of 28 percent over the period. While labor-weighted and other more sophisticated productivity indexes can provide a more complete picture of productivity changes, they do not identify the causes of the changes. These productivity indexes would be the starting point for any analysis to determine the causes of productivity changes. Another example of the advantages of weighted productivity indexes is provided by IRS. As noted earlier, IRS has developed a weighted submission processing productivity measure. The measure adjusts for differences in the complexity of processing various types of tax returns. In an internal analysis, IRS showed how productivity comparisons over time and across the 10 processing centers depended on whether or not the measure was adjusted for complexity. For example, the ranking of the processing centers in terms of productivity changed when the measure was adjusted for the complexity of the returns being processed. The more sophisticated methods for measuring productivity can provide IRS and Congress with better information about IRS’s performance. 
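The quality adjustment just described, in which output counts are scaled by an average quality score before forming the index, can be sketched as follows. The exam counts, FTEs, and quality scores below are hypothetical, chosen only to show how a quality improvement enlarges the measured productivity gain.

```python
# Sketch of a quality-adjusted exam productivity index (hypothetical
# numbers; the scaling-by-quality approach mirrors the LMSB example).

def quality_adjusted_index(exams_0, quality_0, ftes_0,
                           exams_1, quality_1, ftes_1):
    """Index of quality-adjusted exams per FTE, base period = 1.0."""
    adj_0 = exams_0 * quality_0 / ftes_0
    adj_1 = exams_1 * quality_1 / ftes_1
    return adj_1 / adj_0

# Exams per FTE alone rises 16 percent over the period...
unadjusted = (580 / 50) / (500 / 50)  # 1.16
# ...but average quality scores also improved (0.80 -> 0.88), so the
# quality-adjusted index shows a larger gain: 1.16 * (0.88/0.80) = 1.276.
adjusted = quality_adjusted_index(500, 0.80, 50, 580, 0.88, 50)
```

The same scaling could be applied within each exam type before labor weighting, adjusting for quality and complexity at the same time.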
By controlling for complexity and quality, IRS managers would have more complete information about the true productivity of activities, such as exams, that can differ in these dimensions. In addition, the weighted measures can be used to measure productivity for the organization, where many different activities are being performed. More complete information about the productivity of IRS resources should be useful to both IRS managers and Congress. More complete productivity measures would provide better information about the effectiveness of IRS resources, IRS’s budget needs, and IRS’s efforts to improve efficiency. Although there are examples, such as the submission processing productivity measures, of IRS using weighted measures of productivity, IRS officials said they generally use single ratios as measures of productivity. That is consistent with our 2004 report on IRS’s enforcement improvement projects, where we reported on SB/SE’s lack of productivity measures that adjust for complexity and quality. While there would be start-up costs associated with any new methodology, the long-term costs to IRS for developing more sophisticated measures of productivity may be modest. The examples so far in this section demonstrate the feasibility of developing weighted productivity indexes using existing data. Relying on existing data avoids the cost of having to collect new data. The fact that IRS already has some experience implementing weighted productivity measures could reduce the cost of introducing more such measures. As we stated previously, IRS could also improve its productivity measurement by getting better data on quality and complexity. These improved data could be integrated with the methods for calculating productivity illustrated in this report to further improve IRS’s productivity measurement. 
However, as we acknowledged in our prior report, collecting additional data on quality and complexity may require long-term planning and an investment of additional resources. Any such investment, we noted, must take account of the costs and benefits of acquiring the data. Using more sophisticated methods, such as those summarized in this report, for tracking productivity could produce a much richer picture of how IRS manages its resources. This is important not only because of the size of IRS—it will spend about $11 billion in 2005 and employ about 100,000 FTEs—but also because we are entering an era of tight budgets. A more sophisticated understanding of the level of productivity at IRS and the reasons for productivity change would better position IRS managers to make decisions about how to effectively manage their resources. Such information would also better enable Congress and the public to assess the performance of IRS. As we illustrate, more can be done to measure IRS's productivity using current data. However, another advantage of using more sophisticated methods to track productivity is that the methods will highlight the value of better data. Better information about the quality and complexity of IRS's activities would enable the methods illustrated in this report to provide even richer information about IRS's overall productivity. We recommend that the Commissioner of Internal Revenue put in place a plan for introducing wider use of alternative methods of measuring productivity, such as those illustrated in this report, taking account of the costs of implementing the new methods. The Commissioner of Internal Revenue provided written comments on a draft of this report in a June 23, 2005, letter. The Commissioner agreed with our recommendation to work on introducing wider use of alternative measures of productivity. 
Although expressing some caution, he has asked his Deputy Commissioner for Services and Enforcement to work with IRS's Research, Analysis, and Statistics office to assess the possible use of alternative methods of measuring productivity. The Commissioner recognized that a richer understanding of organizational performance is crucial for effective program delivery. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others on request. If you or your staff have any questions, please contact me at (202) 512-9110. I can also be reached by e-mail at whitej@gao.gov. Key contributors to this assignment were Kevin Daly, Assistant Director, and Jennifer Gravelle. Methods for calculating productivity range from computing single ratios to using statistical methods. In its simplest form, a productivity index is the change in the productivity ratio over time relative to a chosen base year. However, this type of productivity index allows for only a single output and a single input. To account for more than one output, the outputs must be combined to produce a composite productivity index. One method is to weight the outputs by their share of inputs used in the chosen base year. In a case where only labor input is used, following this method provides a labor-weighted output index, which, when divided by the input index, produces the labor-weighted productivity index. The use of the share of labor used in each output effectively controls for the allocation of labor across the outputs over time. 
For example, if productivity in producing two outputs remained fixed over time, a single productivity index may show changes in productivity if resources are reallocated to produce more of one of the outputs. The Bureau of Labor Statistics (BLS) has also used labor-weighted indexes. BLS published, under the Federal Productivity Measurement Program, data on labor productivity in the federal government for more than two decades (1967-94). Due to budgetary constraints, the program has been terminated. BLS's measures used the "final outputs" of a federal program, which correspond generally to what we have called intermediate outputs in this report, as opposed to the outcomes or results of the program. BLS used labor weights because of their availability and their close link to cost weights. In particular, as with the labor weights in our illustrations, BLS used base year labor weights and updated the weights every 5 years. It relied only on labor and labor compensation and acknowledged that the indexes did not reflect changes in the quality of labor. BLS measured productivity for a number of federal programs, ranging from social and information services to corrections. However, BLS did not produce productivity measures for IRS. In addition to weighted productivity indexes, there are a number of composite productivity indexes designed to include all the inputs and outputs involved in production. This group of indexes is called Total Factor Productivity (TFP) indexes. They are called total because they include all the inputs and outputs, as opposed to Partial Factor Productivity indexes, which relate only one input to one output. Many of the main TFP indexes, including the Tornqvist, Fisher, Divisia, and Paasche indexes, require reliable estimates of input and output prices, data that are not available for industries in the public sector. Therefore, we use the Malmquist index, which does not require such data. 
Malmquist indexes are TFP indexes based on changes in the distance from the production frontier, or distance functions. These distance functions are estimated using Data Envelopment Analysis (DEA). Productivity change is represented by the ratio of two different period distance functions. The Malmquist index is the geometric average of these productivity changes (evaluated at the two different periods): M = {[Dt(xt+1, yt+1)/Dt(xt, yt)] * [Dt+1(xt+1, yt+1)/Dt+1(xt, yt)]}^1/2, where xt and xt+1 denote the vectors of inputs at times t and t+1, yt and yt+1 denote the vectors of outputs at times t and t+1, and Dt and Dt+1 are distance functions relative to the technology at times t and t+1. This index can be further decomposed into efficiency and technology changes. From the decomposition of the Malmquist index, productivity change can be shown to equal the efficiency change times the technology change: M = {[Dt(xt+1, yt+1)/Dt(xt, yt)] * [Dt+1(xt+1, yt+1)/Dt+1(xt, yt)]}^1/2 = [Dt+1(xt+1, yt+1)/Dt(xt, yt)] * {[Dt(xt+1, yt+1)/Dt+1(xt+1, yt+1)] * [Dt(xt, yt)/Dt+1(xt, yt)]}^1/2 = E*T, the efficiency change, E, times the technology change, T. Because output distance functions are no greater than one, a distance function that is smaller in the second period than in the first indicates a movement away from one over time and thus declining productivity. Thus, a productivity change less than one indicates declining productivity, and therefore an efficiency change less than one also indicates declining efficiency. Alternatively, if the efficiency change is one, then the productivity change equals the technology change. Following the previous analysis, a productivity change less than one indicates declining productivity. Therefore, a technology change less than one indicates an inward shift of the production frontier. If the technology change is less than one, it must be that the distance function in the first period is less than the distance function in the next period. Thus, the distance in the first period is farther away from one than is the distance in the next period, and the distance from the frontier decreased from the first period to the second period. 
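The decomposition into efficiency and technology change can be checked numerically from four distance-function values. The sketch below uses made-up values and our own function name, purely to illustrate the identity M = E * T described in the text.

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist productivity change and its decomposition.

    d_a_b is the output distance function of the period-b input/output
    bundle measured against the period-a technology (all values <= 1).
    """
    # geometric mean of the period-t and period-t+1 productivity ratios
    m = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
    e = d_t1_t1 / d_t_t                                   # efficiency change
    t = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))  # technology change
    return m, e, t

m, e, t = malmquist(0.80, 0.90, 0.70, 0.85)
# m equals e * t (up to rounding); here m > 1, indicating improving productivity
```

With these illustrative values, efficiency improved (e > 1 because the firm moved closer to the frontier) and the frontier itself shifted outward (t > 1), and their product reproduces the overall index exactly.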
Since the output and input bundles did not change, the frontier must shift in to produce the decrease in distance. The Internal Revenue Service (IRS) can follow this method to generate indexes for the areas and territories and then focus on the average for an estimate of overall IRS productivity. The output distance function is defined as D(x, y) = [max{φ | (x, φy) ∈ T}]^-1, so that φ* = (D(x, y))^-1, where φ denotes the value by which output is scaled. For a firm inside the frontier, φ* > 1 and D(x, y) < 1; such a firm is, therefore, inefficient relative to firms with a scalar value of one. Thus, output distance functions are less than or equal to one. IRS can use this method, treating territories and areas as firms. The weights used in the linear program are designed to make each firm look its best; they represent best case scenarios. While DEA is a nonparametric method, there is also a parametric method available called stochastic frontier analysis. Stochastic frontier analysis (regression) uses a regression model to estimate cost or production efficiency. After running the regression of performance and input data, the frontier is found by decomposing the residuals into a stochastic (statistical noise) part and a systematic portion attributed to some form of inefficiency. Stochastic frontier analysis thus requires specifying the distributional form of the errors and the functional form of the cost (or production) function. Its merits include a specific treatment of noise. While DEA’s use of nonparametric methods eliminates the need to specify functional forms, one drawback is a susceptibility to outliers.
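The linear program behind DEA can be written in a few lines. The following is a minimal sketch, under our own variable names and assuming SciPy is available, of an output-oriented, constant-returns-to-scale distance function D(x, y) = 1/φ*, treating each row of data as one "firm" (for IRS, an area or territory):

```python
import numpy as np
from scipy.optimize import linprog

def output_distance(X, Y, firm):
    """Output-oriented CRS DEA distance function for one firm.

    X: (n_firms, n_inputs) input matrix; Y: (n_firms, n_outputs) outputs.
    Returns D(x, y) = 1/phi*, where phi* is the largest factor by which
    the firm's outputs can be scaled while staying inside the frontier
    spanned by all firms. D = 1 means efficient; D < 1, inefficient.
    """
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi
    c = np.concatenate(([-1.0], np.zeros(n)))
    # input constraints: sum_j lambda_j * x_ij <= x_i(firm)
    A_in = np.hstack([np.zeros((m, 1)), X.T])
    # output constraints: phi * y_r(firm) - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([Y[firm].reshape(-1, 1), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[firm], np.zeros(s)]))
    return 1.0 / res.x[0]

# Two firms, one input, one output: firm 0 defines the frontier (2 units of
# output per unit of input); firm 1 produces half as much per unit of input.
X = np.array([[1.0], [1.0]])
Y = np.array([[2.0], [1.0]])
# output_distance(X, Y, 1) is 0.5; output_distance(X, Y, 0) is 1.0
```

The weights λ in this program are chosen to make the evaluated firm look its best, as the text notes; running the program for each firm in each period supplies the distance functions that enter the Malmquist index.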
In the past, the Internal Revenue Service (IRS) has experienced declines in enforcement productivity as measured by cases closed per Full Time Equivalent. Increasing enforcement productivity through a variety of enforcement improvement projects is one strategy being pursued by IRS. Evaluating the benefits of different projects requires good measures of productivity. In addition, IRS's ability to correctly measure its productivity has important budget implications. GAO was asked to illustrate available methods to better measure productivity at IRS. Specifically, our objectives were to (1) describe challenges that IRS faces when measuring productivity, (2) describe alternative methods that IRS can use to improve its productivity measures, and (3) assess the feasibility of using these alternative methods by illustrating their use with existing IRS data. Measuring IRS's productivity, the efficiency with which inputs are used to produce outputs, is challenging. IRS's output could be measured in terms of impact on taxpayers or the activities it performs. IRS's impacts on taxpayers, such as compliance and perceptions of fairness, are intangible and costly to measure. IRS's activities, such as exams or audits conducted, are easier to count but must be adjusted for complexity and quality. An increase in exams closed per employee would not indicate an increase in productivity if IRS had shifted to less complex exams or if quality declined. IRS can improve its productivity measures by using a variety of methods for calculating productivity that adjust for complexity and quality. These methods range from ratios using a single output and input to methods that combine multiple outputs and inputs into composite indexes. Which method is appropriate depends on the purpose for which the productivity measure is being calculated. 
For example, a single ratio may be useful for examining the productivity of a single simple activity, while composite indexes can be used to measure the productivity of resources across an entire organization, where many different activities are being performed. Two examples show that existing data, even though they have limitations, can be used to produce a more complete picture of productivity. For individual exams, composite indexes controlling for exam complexity show a larger productivity decline than the single ratio method. On the other hand, for exams performed in the Large and Mid-Size Business (LMSB) division, the single ratio understates the productivity increase shown by the composite indexes, which again control for complexity. By using alternative methods for measuring productivity, managers would be better able to isolate sources of productivity change and manage resources more effectively. More complete productivity measures would provide better information about IRS effectiveness, budget needs, and efforts to improve efficiency.
USDA’s CBAs provide services such as farm loans and conservation assistance along with rural and economic development help. Under the E-File Act, these agencies were required to establish an electronic filing and retrieval system enabling farmers and other agricultural producers to access departmental forms, such as farm loan applications, via the Internet by December 18, 2000. Along with the December 18 deadline for establishing the forms-retrieval system, the Secretary of Agriculture was also to report to Congress by that date on progress made in implementing the act. The law further mandated that by June 20, 2002, agricultural producers be able to file paperwork electronically with USDA if they choose to do so. RMA administers the federal crop insurance program, which helps protect producers against losses due to drought, flooding, and other unavoidable causes. By December 1, 2000, it was to submit to Congress a plan for enabling producers to obtain forms and information, such as crop insurance applications and production and yield reports, over the Internet. Implementation of the plan is to be completed by December 1, 2001. In addition to addressing the mandates of the Freedom to E-File Act, USDA is—like other agencies—preparing to implement plans in accordance with the GPEA. To develop and implement USDA’s e-file capabilities, on August 30, 2000, the Secretary of Agriculture issued a memorandum to the undersecretaries of the affected agencies and assigned them, along with mission area leaders, collective responsibility for developing and implementing e-file activities. The Secretary’s memorandum also required that one shared plan be developed and implemented to meet E-File Act requirements. The Secretary gave the Office of Chief Information Officer (OCIO) the role of coordinating and facilitating e-file implementation planning and required that the plan be submitted to OCIO no later than September 30, 2000. 
In response to the Secretary’s directive, the undersecretaries transmitted a “Mission Area Report on Freedom to E-File Legislation” to OCIO on October 17, 2000. While providing general information, it lacked specifics on activities and milestones, dependencies among USDA activities, and needed resources. It also did not assign a senior-level official with overall accountability for managing and ensuring the implementation of the disparate e-file activities. To address their e-file requirements, the CBAs had two separate interagency teams working together to meet the act’s December 18, 2000, deadline—one consolidating and developing electronic forms and the other building a technical infrastructure for expanding Internet use. For example, the CBAs purchased an on-line common forms software tool for creating electronic forms, selected forms to post on the Web, and designed and began implementing a common Internet Web site. At the time of our November briefing, the CBAs still needed to obtain OMB approval for each electronic form, complete testing of the Web page, train county-based field staff in the new e-file procedures, publicize the department’s e-file services, and notify the public on how to use them. The CBAs’ progress in meeting the E-File Act’s December 18 deadline was discussed in a report to Congress signed by the Secretary of Agriculture on December 22, 2000. Our review found that, by the December 18 deadline, the CBAs had successfully established a common Internet Web site, obtained OMB approval for 52 FSA and NRCS forms, and placed them on the Internet. However, none of the 100-plus RD forms that the CBAs expected to have deployed on the Web site by the deadline were available. 
According to USDA documentation, OMB had not approved these forms because some forms appeared to be for the department’s internal use, did not have clear and user-friendly instructions, or did not include forms instructions that conformed to the format standard established by OMB and the agencies. As of December 31, 2000, RD had not resubmitted any of its forms for OMB review. A marketing brochure, being developed by the CBAs to promote public awareness of the e-file effort, was still in production at the end of December 2000, and the CBAs had decided not to issue any press releases publicizing the new e-file Web site. In addition, because the CBAs decided that training needs for service center employees were minimal at this phase of the project, informational directives on the new program were provided to employees in lieu of giving them training. Fully meeting all remaining e-file mandates to successfully establish effective and secure electronic filing capabilities by June 2002 poses far more complex and difficult tasks for USDA, such as reengineering business processes and establishing reliable and secure methods of transmitting and storing all electronic records. Moreover, since the E-File Act requires USDA to continue providing services through nonelectronic means as well, the department will also face increased workload demands supporting dual service delivery functions—one electronic and one paper-based. At the end of our review in December 2000, USDA did not have a detailed plan for how it would implement these actions and had not identified how much funding or what staff resources would be required to carry them out. In response to the E-File Act, RMA began work on the required December 1, 2000, plan for allowing agricultural producers the option of obtaining, over the Internet from approved insurance providers, all forms and other information, and filing all paperwork required for participation electronically. 
RMA’s initial efforts focused on establishing and distributing guidelines and policies for crop insurance providers to follow in meeting their e-file responsibilities. By the time of our November briefing, RMA had issued its final e-file guidelines. RMA met the December 1, 2000, deadline for submitting a plan to Congress. This plan outlines the process the agency will use to ensure that insurance providers comply with the act’s e-file requirements. Specifically, insurance providers must follow RMA’s issued guidelines and submit a completed e-business plan to RMA for approval no later than April 1, 2001. RMA said it expects full implementation, as the act requires, by December 1, 2001. USDA has made progress and has partially met initial E-File Act deadlines for providing agricultural producers with access to forms via the Internet and submitting required reports on initial e-file activities and plans to Congress. However, implementing full e-filing capabilities for all its farm service customers by the deadlines set by the act poses a far more complex and difficult challenge. A component critical to the success of any such initiative is the necessary authority and responsibility to manage it across different departmental entities, yet no single official has been so designated. Also, a comprehensive implementation plan—one that addresses GPEA, the Freedom to E-File Act requirements, and OMB’s implementation guidelines—is critical to help the department achieve a more consistent approach in its entire e-government transformation. 
To ensure that USDA fully meets its E-File Act mandates, we recommend that the Secretary of Agriculture assign a senior-level official with overall responsibility, authority, and accountability for managing and carrying out implementation of both the CBAs’ and RMA’s E-File Act requirements; direct the assigned official to work with RD and OMB to expedite resubmission and approval of all appropriate RD forms and ensure that these forms are made available over the Internet as soon as possible; and direct the assigned official, in cooperation with the undersecretaries for Farm and Foreign Agricultural Service, Natural Resources and Environment, and Rural Development, and the OCIO, to develop and document a comprehensive plan for implementing all E-File Act requirements. In developing the department’s comprehensive plan, we further recommend that the Secretary of Agriculture direct the assigned senior official to document and track all critical activities and milestones, dependencies among major activities, and resources needed to complete these efforts. In addition, the plan should clearly describe all project tasks, their priorities, and time frames and milestones for their completion; assign task responsibilities to staff and show critical dependencies; identify required staff/budget resources for completing the plan; and document contingency actions planned to address unforeseen work delays or problems. We also recommend that the Secretary direct the assigned senior official to include both e-file and GPEA requirements in the department’s comprehensive plan to help better coordinate actions across USDA agencies and apply a consistent approach for addressing all mandated requirements and deadlines during USDA’s e-government transformation. 
Finally, we recommend that the Secretary hold the senior official accountable for carrying out the comprehensive plan and require that this official provide quarterly reports to the Secretary describing the results of USDA’s efforts to implement each of these actions and all e-file requirements. On November 8, 2000, we provided a copy of our briefing materials, which were used in preparing this report, to USDA’s CIO, deputy CIO, and officials representing USDA’s CBAs and RMA. These officials generally agreed with our briefing. They stated that providing more focused leadership and having a comprehensive implementation plan for the e-file effort would increase the department’s overall chances of success with fully implementing the Freedom to E-File Act. In its December 22, 2000, progress report to Congress, and consistent with our recommendations, USDA said that it plans to begin an effort in January 2001 to develop comprehensive project plans for enhanced services that meet the 2-year requirements of the Freedom to E-File Act and GPEA. On February 13, 2001, USDA’s Acting CIO provided written comments on a draft of this report. USDA’s comments are summarized below and reproduced in appendix II. USDA said that it fully supports the spirit and intent, as well as the legal mandates, of the E-File Act. USDA agreed that it had problems meeting the act’s initial December 18, 2000, deadline for deploying electronic forms on the Internet and that significant challenges remain to fully implement the act. USDA also stated that successful implementation of the act will require continued funding, along with understanding and support of the USDA’s programs or “owners” of the business being transformed. USDA agreed with our recommendation for making RD forms available on the Internet as soon as possible. 
The department also agreed that comprehensive plans must be developed for implementing the Freedom to E-File Act and GPEA to help better coordinate across agencies and apply a consistent approach for addressing USDA’s e-transformation. However, the department stopped short of describing the extent to which its comprehensive plan will include all the detailed steps we recommended or what the department’s time frame is for completing it. Moreover, it was unclear from USDA’s response whether the department planned to implement our recommendation for assigning a senior-level official with overall e-file responsibility, authority, and accountability for managing and carrying out implementation of the E-File Act requirements. With respect to our last recommendation for providing the Secretary quarterly reports on implementation results, USDA stated that OCIO will continue to ensure that the Secretary is fully informed on the department’s progress in meeting the E-File Act requirements. However, the department did not specify when and how progress will be reported to the Secretary nor did it describe how accountability for results will be ensured. We continue to believe that having a senior-level official vested with sufficient accountability and authority is important to the success of USDA’s e-file implementation efforts. As requested, our objective was to review measures being taken by the department to implement the provisions of the Freedom to E-File Act. In carrying out our work, we obtained and reviewed USDA and contractor documents and discussed actions planned or under way with department officials handling implementation of the act and assessed progress made. On November 15, 2000, we briefed your staff on the results of our review up to that point. Our work on the briefing was performed from August through October 2000. We performed follow-up work to update USDA’s progress implementing the act through December 31, 2000. 
The results of all of our work are summarized in this report. We conducted our review at USDA headquarters in Washington, D.C., and at key agency offices involved in e-file activities in Fort Collins, Colorado, and Kansas City, Missouri. Our work was done in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to Representative Eva Clayton, Ranking Minority Member, Subcommittee on Department Operations, Oversight, Nutrition, and Forestry, House Committee on Agriculture; Senator Richard Lugar, Chairman, and Senator Tom Harkin, Ranking Member, Senate Committee on Agriculture, Nutrition, and Forestry; Representative Larry Combest, Chairman, and Representative Charles Stenholm, Ranking Minority Member, House Committee on Agriculture; Representative Tom Davis, Chairman, Representative Jo Ann Davis, Vice Chairwoman, and Representative Jim Turner, Ranking Minority Member, Subcommittee on Technology and Procurement Policy, House Committee on Government Reform; and Representative Stephen Horn, Chairman, Representative Ron Lewis, Vice Chairman, and Representative Janice Schakowsky, Ranking Minority Member, Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform. We will also send copies to the Honorable Ann M. Veneman, Secretary of Agriculture; the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Should you have any questions on matters discussed in this report, please contact me at (202) 512-6257 or Stephen A. Schwartz, Senior Assistant Director, at (202) 512-6213. We can also be reached by e-mail at mcclured@gao.gov and schwartzs@gao.gov, respectively. 
The Freedom to E-File Act requires the Farm Service Agency (FSA), the Natural Resources Conservation Service (NRCS), and the Rural Development (RD) mission area--hereafter referred to collectively as County-based Agencies (CBAs)--and the Risk Management Agency (RMA)* to establish an electronic filing and retrieval system enabling farmers and others to file paperwork electronically. The act provides separate requirements for the CBAs and RMA. Not later than December 18, 2000, the Secretary is to establish an Internet-based system enabling agricultural producers to access all forms and shall submit to Congress a report that describes the progress made. Not later than June 20, 2002, the system shall be expanded to enable producers to access and file all forms and, at the option of the Secretary, selected records and information. Not later than December 1, 2000, RMA is to submit to Congress a plan to allow agricultural producers the option of obtaining, over the Internet from approved insurance providers, all forms and other information and filing all paperwork required for participation electronically. Not later than December 1, 2001, RMA should complete implementation of the plan. *RMA administers the federal crop insurance program, under which insurance policies are sold and serviced by private companies to help protect agricultural producers against crop losses due to drought, flooding, and other unavoidable causes. The Government Paperwork Elimination Act (GPEA) (P.L. 105-277, October 28, 1998)* requires that by 2003, federal agencies provide the public, where practicable, the option of submitting, maintaining, or disclosing information--such as employment records, tax forms, and loan applications--electronically, instead of on paper. On the basis of guidance issued by the Office of Management and Budget (OMB), agencies are preparing plans for implementing GPEA, including the use of electronic signatures. *P.L. No. 105-277, Div. C, tit. XVII. 
In response to the E-File Act, the Secretary issued a memorandum to the undersecretaries for the affected agencies, requiring that one shared plan be developed and implemented to meet E-File Act requirements and that the plan be submitted to the Office of Chief Information Officer (OCIO) no later than September 30, 2000. The Secretary assigned the undersecretaries and mission area leaders collective responsibility for developing and implementing e-file activities and gave OCIO the role of coordinating/facilitating e-file implementation planning. In response to the Secretary’s memorandum, the undersecretaries approved and transmitted the “Mission Area Report on Freedom to E-File Legislation” to the CIO on October 17, 2000. The report, which included two separate sections covering the CBAs and RMA, provides general information on USDA activities to address e-file requirements. However, the report did not establish and define all major activities and milestones, dependencies among activities, and resources necessary to complete them, or assign a senior-level official with overall responsibility and accountability for managing and implementing all the separate e-file activities. To meet the act’s December 18 deadline, the CBAs purchased an on-line common forms software tool for creating electronic forms; selected a total of 219 forms (57 from FSA, 6 from NRCS, and 156 from RD) to post on the Web (USDA believes customers can complete these on their own with assistance only from form completion instructions); coordinated with OMB to develop a user-friendly format and are working to obtain OMB approval for each new electronic form; and designed and are implementing a common Internet Web site that can utilize a single Internet address to provide user access with common search and retrieval functions for all available forms. At the time of our briefing, however, much work remained to be done to meet the December 18 deadline. 
For example, CBAs still needed to obtain final OMB approval for each of the new electronic forms; contractor testing of the Web page design, which is not addressed in the October 17 report, was scheduled to be completed December 15, the last workday before the deadline; and some key technical staff who were implementing web farm hardware, software, and security had other full-time duties and had no replacements should they be assigned elsewhere. In addition, plans still needed to be established to train county-based field office staff in the new e-filing procedures and in providing customer assistance, and publicizing USDA’s new e-file services and notifying the public on how to use them still needed to be done. USDA officials working on these activities believe that these tasks can be accomplished by the December 18 deadline. Fully meeting the act’s remaining mandates will require establishing full e-government services across a broad range of USDA programs and building in solutions that also address GPEA and OMB requirements. It will involve reengineering numerous existing programs and systems; using multiple electronic submission processes to accommodate various categories of agency customers; designing and investing in technology to securely connect service center agencies to customers and to USDA’s national network; developing software to move and utilize data collected from customers to appropriate serving locations; and training employees in new roles, responsibilities, and technologies. The CBAs also face workload increases. The E-File Act requires USDA to continue providing services in the traditional way to customers who choose not to use the Internet. CBAs must therefore support dual service delivery functions--one electronic and one paper-based. USDA has not yet identified how much funding and staff resources will be needed to fully implement the act. The E-File Act provides that the Secretary is to reserve, from applicable accounts of the CBAs, not more than $3 million for fiscal year 2001 and $2 million for each subsequent fiscal year. Decisions on use of these accounts and funding are still pending. 
Detailed planning on how USDA will carry out the tasks needed to meet the June 20, 2002, deadline for implementing full electronic filing capabilities will not begin until January 2001. In response to the E-File Act, RMA is working on a December 1, 2000, plan for allowing agricultural producers the option of obtaining, over the Internet from approved insurance providers, all forms and other information and filing all paperwork required for participation electronically. RMA has issued guidelines* for crop insurance providers that address the location and type of data made available, where paperwork can be filed, and the responsibilities of the applicable parties. RMA said it approved plans from two participating insurance providers on September 26, 2000, enabling them to market and service federal crop insurance programs over the Internet. USDA has taken actions to address the Freedom to E-File Act, and the CBAs’ and RMA’s October 17 report, done in response to the Secretary’s request for a shared e-file plan, generally discusses their actions. However, several steps essential to the overall success of USDA’s e-file initiative remain to be done. Specifically, USDA has not assigned a senior-level official with overall responsibility, authority, and accountability for managing and implementing all the separate activities to ensure that critical tasks are completed on time and within budget and that all federal mandates are met. USDA has also not yet developed and documented a comprehensive e-file implementation plan. *These were issued in final on November 1, 2000. 
Having such a plan is important to define the milestones for all major activities, dependencies and critical tasks among these activities, and resources required to complete them; help identify priorities as to which activities must be completed first and where milestone and resource shifts may be made to ensure that the most critical activities are completed on time, within budget, and, more important, are successful; and address OMB and GPEA requirements by coordinating actions across mission areas and applying a more consistent approach during USDA’s e-government transformation. To ensure that USDA fully meets its E-File Act mandates, the Secretary of Agriculture should assign a senior-level official with overall responsibility, authority, and accountability for managing and carrying out implementation of all E-File Act requirements and direct that the assigned senior-level official, in cooperation with the undersecretaries for Farm and Foreign Agricultural Service, Natural Resources and Environment, and Rural Development, and the CIO, develop and document a comprehensive plan for implementing all E-File Act requirements. The plan should describe all project tasks, their priority, and time frames and milestones for their completion; assign task responsibilities to staff and show critical dependencies; identify required staff/budget resources for completing the plan; and document contingency actions planned to address unforeseen work delays or problems. In addition, the comprehensive plan should cover both e-file and GPEA requirements to help better coordinate actions across USDA agencies and apply a consistent approach for addressing all mandated requirements and deadlines during USDA’s e-government transformation. 
The Department of Agriculture (USDA) has made progress in implementing the Freedom to E-File Act and has partially met the act's initial deadlines for providing agricultural producers with access to forms via the Internet and submitting required reports on initial e-file activities and plans to Congress. However, implementing full e-filing capabilities for all its farm service customers by the deadlines set by the act poses a far more complex and difficult challenge. Critical to the success of any such initiative is the necessary authority and responsibility to manage it across different departmental entities. Yet no single official has been so designated. Also, a comprehensive implementation plan--one that addresses the Government Paperwork Elimination Act (GPEA), the Freedom to E-File Act requirements, and the Office of Management and Budget's (OMB) implementation guidelines--is critical to help USDA achieve a more consistent approach in its entire e-government transformation.
VA provides health care services to various veteran populations— including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA operates 152 hospitals, 133 nursing homes, 824 community-based outpatient clinics, and other facilities to provide care to veterans. In general, veterans must enroll in VA health care to receive VA’s medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans’ own homes and in other locations in the community. VA also provides some services that are not part of its medical benefits package, such as long-term care provided in nursing homes. To meet the expected demand for health care services, VA develops a budget estimate each year of the resources needed to provide these services. This budget estimate includes the total cost of providing health care services, including direct patient costs as well as costs associated with management, administration, and maintenance of facilities. VA develops most of its budget estimate using the EHCPM. The EHCPM’s estimates are based on three basic components: the projected number of veterans who will be enrolled in VA health care, the projected quantity of health care services enrollees are expected to use, and the projected unit cost of providing these services. The EHCPM makes these projections 3 or 4 years into the future for budget purposes based on data from the most recent fiscal year. For example, in 2010, VA used data from fiscal year 2009 to develop its health care budget estimate for the fiscal year 2012 request and advance appropriations request for 2013. VA uses other methods to estimate needed resources for long-term care, other services, and health-care-related initiatives proposed by the Secretary of VA or the President. 
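In the simplest reading, the three EHCPM components described above combine multiplicatively: projected enrollees, times the services each enrollee is expected to use, times the unit cost of those services, summed over service categories. The sketch below is purely illustrative, with hypothetical names and numbers of our own; VA's actual model applies many additional adjustments not shown here.

```python
def projected_cost(categories):
    """Illustrative projection: sum over service categories of
    enrollees * services used per enrollee * unit cost per service."""
    return sum(enrollees * use_rate * unit_cost
               for enrollees, use_rate, unit_cost in categories)

# Two hypothetical service categories (enrollees, services/enrollee, $/service):
estimate = projected_cost([
    (8_000_000, 4.2, 150.0),   # e.g., outpatient visits
    (8_000_000, 12.0, 45.0),   # e.g., outpatient prescriptions
])
```

Each of the three factors is projected 3 or 4 years forward from the most recent fiscal year's data, which is why, as the text notes, the fiscal year 2012 request rested on fiscal year 2009 data.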
As previously reported, these methods estimate needed resources based on factors that may include historical data on costs and the amount of care provided, VA’s policy goals for health care services such as long-term care, and predictions of the number of users. For example, VA’s projections for long-term care for fiscal year 2012 were based on fiscal year 2010 data on the amount of care provided and the unit cost of providing a day of this care. Typically, VA’s Veterans Health Administration (VHA) starts to develop a health care budget estimate approximately 10 months before the President submits the budget to Congress in February. The budget estimate changes during the 10-month budget formulation process in part due to successively higher levels of review in VA and OMB before the President’s budget request is submitted to Congress. For example, the successively higher levels of review resulting in the fiscal year 2012 President’s budget request are described in table 1. The Secretary of VA considers the health care budget estimate developed by VHA when assessing resource requirements among competing interests within VA, and OMB considers overall resource needs and competing priorities of other agencies when deciding the level of funding requested for VA’s health care services. OMB issues decisions, known as passback, to VA and other agencies on the funding and policy proposals to be included in the President’s budget request. VA has an opportunity to appeal the passback decisions before OMB finalizes the President’s budget request, which is submitted to Congress in February. Concurrently, VA prepares a congressional budget justification that provides details supporting the policy and funding decisions in the President’s budget request. Each year, Congress provides funding for VA health care through the appropriations process. 
For example, Congress provided new appropriations of about $48.0 billion for fiscal year 2011 and advance appropriations of $50.6 billion for fiscal year 2012 for VA health care. In addition to new appropriations that VA may receive from Congress as a result of the annual appropriations process, funding may also be available from unobligated balances from multiyear appropriations, which remain available for a fixed period of time in excess of 1 fiscal year. For example, VA’s fiscal year 2011 appropriations provided for some amounts to be available for 2 fiscal years. These funds may be carried over from fiscal year 2011 to fiscal year 2012 if they are not obligated by the end of fiscal year 2011. VA and OMB consider anticipated unobligated balances when formulating the President’s budget request. VA has statutory authority to collect amounts from patients, private insurance companies, and other government entities to be obligated for health care services. VA collects first-party payments from veterans, such as copayments for outpatient medications, and third-party payments from veterans’ private health insurers for deposit into the Medical Care Collections Fund (MCCF). Amounts in the MCCF are available without fiscal year limitation for VA health care and expenses of certain activities related to collections subject to provisions of appropriations acts. VA also receives reimbursements from services it provides to other government entities, such as the Department of Defense (DOD), or to private or nonprofit entities. For example, in 2006, we reported that VA received reimbursements from other entities by selling laundry services. These amounts also contribute to decisions on funding in the President’s budget request. 
Congress provides funding for VA health care through three appropriations accounts: Medical Services, which funds health care services provided to eligible veterans and beneficiaries in VA’s medical centers, outpatient clinic facilities, contract hospitals, state homes, and outpatient programs on a fee basis; Medical Support and Compliance, which funds the management and administration of the VA health care system, including financial management, human resources, and logistics; and Medical Facilities, which funds the operation and maintenance of the VA health care system’s capital infrastructure, such as costs associated with non-recurring maintenance, utilities, facility repair, laundry services, and groundskeeping. Funding was appropriated for fiscal year 2012 for the three accounts in the following proportions: Medical Services at 78 percent, Medical Support and Compliance at 11 percent, and Medical Facilities at 11 percent. VA identified several changes that were made to its budget estimate to help develop the President’s budget request for VA for fiscal years 2012 and 2013. In one change, VA noted that the resources identified in its budget justification for non-recurring maintenance (NRM) were lower than the amount estimated using the EHCPM by $904 million for fiscal year 2012 and $1.27 billion for fiscal year 2013. Funds for NRM are used to repair and improve VA health care facilities and come from the Medical Facilities appropriations account. The President’s budget request reflected resource levels of $869 million for NRM for fiscal year 2012 and $600 million for fiscal year 2013. OMB staff said that amounts identified for NRM in VA’s congressional budget justification were lower than estimated amounts due to a policy decision to fund other initiatives and to hold down the overall budget request for VA health care without affecting the quality and timeliness of VA’s health care services. 
VA officials said NRM amounts that were identified for fiscal years 2012 and 2013 should be sufficient to maintain VA health care facilities in their current conditions. In recent years, VA’s spending on NRM has been greater than the amounts identified in VA’s budget justifications and reflected in the President’s budget requests (see table 2). The higher spending is consistent with VA’s authority to increase or decrease the amounts VA allocates from the Medical Facilities account for NRM and with congressional committee report language. While VA’s NRM spending has exceeded amounts identified in VA’s budget justifications over the last several years, VA’s projection of the NRM backlog for health care facilities—which reflects the total amount needed to address facility deficiencies—has increased to nearly $10 billion. Changes also were made to EHCPM estimates for health care equipment. For equipment purchases, VA identified that the resource request in its budget justification was $15 million lower than the amount estimated using the EHCPM for fiscal year 2012 and $410 million lower than the amount estimated using the EHCPM for fiscal year 2013. The President’s budget reflected a request of $1.034 billion for fiscal year 2012 and $700 million for fiscal year 2013 to purchase health care equipment. OMB staff said amounts identified for equipment were lower than estimated amounts due to a policy decision to fund other initiatives and to hold down the overall budget request for VA health care without affecting the quality and timeliness of VA’s health care services. In addition, estimates of resource needs for employee salaries were reduced due to the enactment of a law requiring the elimination of across-the-board pay raises for federal employees in 2011 and 2012. This 2-year pay raise freeze led to a reduction of $713 million for fiscal year 2012 and $815 million for fiscal year 2013 from VA’s health care budget estimate. 
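The reductions described above imply what the EHCPM estimates were before the policy decisions: adding each reduction back to the requested amount recovers the model's figure. A quick arithmetic check using the numbers in the text (in $ millions):

```python
# Requested amount + reduction = implied EHCPM estimate (in $ millions),
# using the NRM and equipment figures reported above.
nrm_fy2012 = 869 + 904        # implied NRM estimate for FY2012: 1,773
nrm_fy2013 = 600 + 1270       # implied NRM estimate for FY2013: 1,870
equipment_fy2012 = 1034 + 15  # implied equipment estimate for FY2012: 1,049
equipment_fy2013 = 700 + 410  # implied equipment estimate for FY2013: 1,110

print(nrm_fy2012, nrm_fy2013, equipment_fy2012, equipment_fy2013)
```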
The amount of the reduction was calculated separately from the EHCPM because the EHCPM does not have an explicit assumption for pay increases. VA officials said that OMB staff calculated the impact on the President’s budget request for VA health care for fiscal year 2013. The lower salary base that resulted from the pay freeze in 2011 and 2012 also would reduce the overall salary level for fiscal year 2013. According to VA’s budget justification, VA’s health care budget estimate was further reduced by $1.2 billion for fiscal year 2012 and by $1.3 billion for fiscal year 2013 to reflect expected savings from what VA identified as six operational improvements. Expected savings from these operational improvements are a result of planned changes in the way VA manages its health care system to lower costs. The operational improvements for fiscal years 2012 and 2013 from VA’s budget justification are the following: Acquisitions. The operational improvement with the largest estimated cost savings is VA’s proposed changes to its purchasing and contracting strategies, for which VA estimates a savings of $355 million a year for fiscal years 2012 and 2013. For example, VA has proposed savings by increasing competition for contracts that were formerly awarded on a sole-source basis. Changing rates. VA proposed to purchase dialysis treatments and other care from civilian providers at Medicare rates instead of current community rates. VA estimates a savings of $315 million for fiscal year 2012 and $362 million for fiscal year 2013 as a result of this rate change. Fee care. VA proposed initiatives to generate savings from health care services that VA pays contractors to provide. VA estimates a savings of $200 million a year for fiscal years 2012 and 2013 from reductions in its payments for fee-based care. Realigning clinical staff and resources. VA proposed to realign clinical staff and resources to achieve savings by using less costly health care providers. 
Specifically, VA plans to use selected non-physician providers instead of certain types of physicians, use selected licensed practical nurses instead of certain types of registered nurses, and more appropriately align required clinical skills with patient care needs. VA estimates a savings of $151 million a year for fiscal years 2012 and 2013 from clinical staff and resource realignment. Medical and administrative support. VA proposed to employ resources more efficiently in various medical care, administrative, and support activities at each medical center and in other VA locations. For example, a VA official said that VA could examine job vacancies for medical and administrative support to see whether vacant positions need to be filled. VA estimates a savings of $150 million a year for fiscal years 2012 and 2013 for this operational improvement. VA real property. VA proposed initiatives to repurpose its vacant or underutilized buildings, demolish or abandon other vacant or underutilized buildings, decrease energy costs, change procurement practices for building supplies and equipment, and change building-service contracts. VA estimates a savings of $66 million a year for fiscal years 2012 and 2013 from real property initiatives. In the past, VA has proposed management efficiencies to achieve savings in order to reduce the amount of funding needed to provide health care services. However, in a 2006 report, we reported that VA lacked a methodology for its assumptions about savings estimates it had detailed for fiscal years 2003 through 2006, and we concluded that VA may need to take actions to stay within its level of available resources if VA fell short of its savings goals. According to VA officials, VA is planning to develop a system to monitor the operational improvements to determine whether they are generating the intended savings. 
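As a quick arithmetic check, the six operational-improvement savings estimates listed above sum to the $1.2 billion (fiscal year 2012) and $1.3 billion (fiscal year 2013) totals cited in VA's budget justification:

```python
# Estimated savings (in $ millions) for the six operational improvements,
# as (FY2012, FY2013) pairs taken from VA's budget justification.
savings = {
    "acquisitions": (355, 355),
    "changing rates": (315, 362),
    "fee care": (200, 200),
    "realigning clinical staff and resources": (151, 151),
    "medical and administrative support": (150, 150),
    "VA real property": (66, 66),
}

fy2012 = sum(pair[0] for pair in savings.values())  # 1,237
fy2013 = sum(pair[1] for pair in savings.values())  # 1,284

# Rounded to the nearest $0.1 billion, these match the cited totals.
print(round(fy2012 / 1000, 1), round(fy2013 / 1000, 1))  # 1.2 1.3
```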
VA’s health care budget estimate was increased overall by about $1.4 billion for fiscal year 2012 and $1.3 billion for fiscal year 2013 to support health-care-related initiatives proposed by the administration, according to VA officials. Some of the proposed initiatives can be implemented within VA’s existing authority, while other initiatives would require a change in law. VA officials estimated that the majority of initiatives would increase resource needs for new health care services or expansion of existing services. Four initiatives, which make up over 80 percent of the total amount for initiatives in the President’s budget request, are: Homeless veterans programs. VA officials estimated that this initiative, which supports the agency’s goal to end homelessness among veterans, would increase VA’s resource needs by $460 million for fiscal year 2012 as well as for fiscal year 2013. This would allow VA to expand existing programs and develop new ones to prevent veterans from becoming homeless and to help veterans who are currently homeless through programs such as assisting veterans with acquiring safe housing, receiving needed health care services, and locating employment opportunities. Opening new health care facilities. This initiative would provide VA with the resources to purchase equipment and supplies and complete other activities that are necessary to open new VA health care facilities and begin providing health care services to veterans. VA officials estimated that this initiative would increase VA’s resource needs by $344 million for fiscal year 2012 as well as for fiscal year 2013. Additional services for caregivers. This initiative would give VA the resources to expand services to caregivers of the most severely wounded veterans returning from Afghanistan and Iraq, as required by the Caregivers and Veterans Omnibus Health Services Act of 2010. 
For example, this initiative would provide caregivers a monthly stipend and eligibility to receive VA health care benefits. To provide these additional services to caregivers, VA officials estimated that the agency’s resource needs would increase by $208 million for fiscal year 2012 and $248 million for fiscal year 2013. Benefits for veterans exposed to Agent Orange. This initiative would provide VA with the resources to implement activities required by the Agent Orange Act of 1991, which directs the Secretary of VA to extend health care benefits to veterans known to have been exposed to Agent Orange who have certain conditions, such as some types of leukemia, and to issue regulations establishing presumptions of service connection for diseases that the Secretary finds to be associated with exposure to an herbicide agent. VA officials estimated that to provide these additional benefits, its resource needs would increase by $171 million for fiscal year 2012 and $191 million for fiscal year 2013. VA officials estimated a small number of initiatives in the President’s budget request would decrease VA’s spending needs. These initiatives propose ways for VA to reduce costs. For example, the Medicare ambulatory rates initiative proposes that Congress amend current law to allow VA to reimburse vendors for certain types of transportation, such as ambulances, at the local prevailing Medicare ambulance rate in the absence of a contract. VA expects that by paying transportation vendors the Medicare rate instead of their current billing rate—which VA reported may be up to three to four times the Medicare rate—VA’s resource needs related to certain types of transportation would decrease by about $17 million for fiscal year 2012 as well as for fiscal year 2013. VA’s overall estimate for long-term care and other services was reduced, according to VA officials and OMB staff, to reflect more current data that became available during the 10-month budget formulation process. 
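The statement above that four initiatives make up over 80 percent of the total initiative amount can be checked against the figures in the text; the overall initiative increases of about $1.4 billion (fiscal year 2012) and $1.3 billion (fiscal year 2013) come from the same passage:

```python
# Estimated resource increases (in $ millions) for the four largest
# initiatives, as (FY2012, FY2013) pairs taken from the text.
initiatives = {
    "homeless veterans programs": (460, 460),
    "opening new health care facilities": (344, 344),
    "additional services for caregivers": (208, 248),
    "benefits for veterans exposed to Agent Orange": (171, 191),
}

# Overall initiative totals from the text: ~$1.4B (FY2012), ~$1.3B (FY2013).
totals = {"FY2012": 1400, "FY2013": 1300}

top4_fy2012 = sum(pair[0] for pair in initiatives.values())  # 1,183
top4_fy2013 = sum(pair[1] for pair in initiatives.values())  # 1,243

print(top4_fy2012 / totals["FY2012"] > 0.8)  # True (~85 percent)
print(top4_fy2013 / totals["FY2013"] > 0.8)  # True (~96 percent)
```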
To meet OMB’s timeline for preparing the President’s budget request, VA initially produced estimates for long-term care and CHAMPVA services in May 2010. These estimates were based on a mix of available data representing the actual amount of care provided and unit costs for these services to date and projections for these services for the remainder of the 2010 fiscal year. VA had to project data because only partial-year data were available in May. Between May and November 2010, VA provided OMB with periodic updates of the most current data available. OMB staff, with input from VA officials, finalized the estimate for the President’s budget request using this information, which, according to VA officials and OMB staff, resulted in a lower estimate overall for long-term care and other services than the estimate VA produced in May 2010. VA, however, did not provide us with the amount of the decrease in the estimate. According to VA officials, VA’s health care budget estimate was increased by $420 million for fiscal year 2012 and by $434 million for fiscal year 2013 to account for the costs of providing health care to non-veterans, including active duty service members and other DOD beneficiaries under sharing agreements, and certain VA employees who are not enrolled as veterans. Since VA’s estimates from the EHCPM are based on the cost of treating veterans, the agency developed the estimates for providing health care to non-veterans separately. VA’s estimate from the EHCPM was also increased by $220 million for fiscal year 2012 to reflect enhancements for rural health care for veterans, according to VA officials. Congress directed VA to spend $250 million on enhancements for rural health care in fiscal year 2009, and VA made a policy decision to continue spending this amount on enhancements for rural health care in subsequent years, according to VA officials. However, VA was not able to spend the entire $250 million in fiscal year 2009 and spent only $30 million. 
Since VA used data from fiscal year 2009 in the EHCPM to develop its health care budget estimate for fiscal year 2012, VA’s estimate projected $30 million in spending for enhancements for rural health care for that fiscal year. As a result, VA’s estimate was increased by about $220 million to reflect the agency’s planned $250 million spending for this policy change. The President’s request for appropriations for VA health care for fiscal years 2012 and 2013 relied on anticipated funding from several sources. Of the $54.9 billion requested by the President for fiscal year 2012 to fund VA’s health care services, $50.9 billion was requested in new appropriations. This request was an increase of 5.5 percent from the amount requested for fiscal year 2011—the lowest requested percent increase in recent years. The request assumes the availability of about $4.0 billion from collections, unobligated balances of multiyear appropriations, and reimbursements. Similarly, of the $56.7 billion requested by the President for fiscal year 2013, $52.5 billion was requested in new appropriations—an increase of 3.3 percent from the fiscal year 2012 request. About $4.1 billion was expected to be available from other funding sources. (See table 3.) VA estimates the amount of funding from these other sources as part of its congressional budget justification supporting the President’s request. As table 3 shows, the President’s budget request assumes that VA will collect about $3.1 billion for fiscal year 2012 and $3.3 billion for fiscal year 2013. These funds are from health insurers of veterans who receive VA care for nonservice-connected conditions, as well as from other sources, such as veterans’ copayments. VA has the authority to retain these collections in the MCCF and may use them without fiscal year limitation for providing VA medical care and services and for paying departmental expenses associated with the collections program. 
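Two of the figures above follow from simple arithmetic; a brief check using the numbers in the text:

```python
# Rural health care adjustment (in $ millions): VA planned $250 million in
# annual spending, but the FY2009 base-year data reflected only $30 million.
planned = 250
base_year = 30
adjustment = planned - base_year
print(adjustment)  # 220 -> the ~$220 million increase to the estimate

# FY2012 request composition (in $ billions): new appropriations plus other
# funding sources (collections, carryover, reimbursements) equal the total.
new_appropriations = 50.9
other_sources = 4.0
print(round(new_appropriations + other_sources, 1))  # 54.9
```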
According to VA officials, VA reduced its $3.7 billion estimate for collections included in the fiscal year 2012 advance appropriations request by approximately $600 million for the fiscal year 2012 President’s budget request. VA officials said that because of the depressed economy, fewer enrollees have comprehensive health insurance that VA can bill for third-party payments for services that VA provides. In addition, even if enrollees do have health insurance that VA can bill, insurance companies are increasingly reducing payment amounts to levels stipulated in the insurers’ own policies. Finally, because the enrollee population is aging, the percentage of enrollees who are Medicare beneficiaries is rising. As a result, VA is increasingly limited to billing enrollees’ Medicare Supplement Insurance policies, because fewer enrollees have full health insurance policies that VA can bill. The President’s budget request also assumes that VA will have unobligated balances left from fiscal years 2011 and 2012 totaling $1.1 billion to obligate in fiscal years 2012 and 2013. Specifically, VA proposes to carry over $600 million of the funds left from fiscal year 2011 to obligate in fiscal year 2012 and to carry over $500 million of the funds left from fiscal year 2012 to obligate in fiscal year 2013. VA assumes that Congress will provide some multiyear funding and thus VA will be able to carry over any unobligated balances from one fiscal year to the next fiscal year. The fiscal year 2011 full-year continuing resolution provided that $1.2 billion would be available for 2 fiscal years, so VA has the ability to use unobligated balances in fiscal year 2012, including the $600 million proposed, if that amount remains available. If the fiscal year 2012 appropriations also provide funding that is available for 2 fiscal years, VA would be able to carry over the $500 million in unobligated balances, if available, from fiscal year 2012 into fiscal year 2013 as proposed. 
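The collections and carryover assumptions described above also reduce to simple arithmetic (figures in $ billions, from the text):

```python
# Collections: the $3.7 billion advance appropriations estimate was reduced
# by approximately $0.6 billion for the FY2012 President's budget request.
collections_fy2012 = 3.7 - 0.6
print(round(collections_fy2012, 1))  # 3.1 -> the $3.1 billion figure cited

# Unobligated balances: $0.6 billion carried from FY2011 into FY2012 plus
# $0.5 billion carried from FY2012 into FY2013.
carryover_total = 0.6 + 0.5
print(round(carryover_total, 1))  # 1.1 -> the $1.1 billion total assumed
```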
The President’s budget request also assumes that VA will receive $343 million and $358 million in reimbursements for fiscal years 2012 and 2013, respectively, from services it provides to other government entities as well as prior year recoveries. For example, VA receives reimbursements for medical services it provides under sharing agreements with DOD, including to TRICARE beneficiaries. VA estimates that prior year recoveries will be approximately $3 million for each of the fiscal years 2012 and 2013. Of the $54.9 billion in total resources requested by the President for fiscal year 2012, $953 million represents contingency funding to be available under certain circumstances for health care services, supplies, and materials. This contingency funding would only be made available to VA through the Medical Services appropriations account if the Director of OMB concurs with the Secretary of VA’s determination that economic conditions warrant the additional funds. The Secretary’s determination would reflect an examination of national unemployment rates, the quantity of VA health care services enrollees use, and the amount of spending for VA’s health care services. According to staff at OMB, any unused contingency funds would expire at the end of the fiscal year and could not be used to fund VA health care services in future years. OMB determined that the contingency funding request for fiscal year 2012 would be the amount projected by the EHCPM with some adjustment for OMB’s economic assumptions. This amount was calculated by estimating the potential impact of a recent downturn in the economy on veterans’ use of VA health care. VA conducted an analysis of unemployment rates and their effect on enrollees’ use of VA’s health care services. VA showed that enrollees under age 65 who lost their jobs, and therefore their access to employer-sponsored health insurance, relied more heavily on VA health care services. 
For the first time since developing the model, VA incorporated unemployment rates into the EHCPM to estimate the effect of the economic downturn on VA’s needed resources. The President’s fiscal year 2012 budget request did not include contingency funding for fiscal year 2013 advance appropriations because OMB was uncertain whether the increased costs VA anticipated as a result of the economic downturn would materialize. OMB staff said they planned to monitor VA’s fiscal year 2011 performance and would request contingency funding for fiscal year 2013 if needed, as part of the President’s fiscal year 2013 budget request. Budgeting for VA health care, by its very nature, is complex because assumptions and imperfect information are used to project the likely demand and cost of the health care services VA expects to provide. The complexity is compounded because most of VA’s projections anticipate events 3 to 4 years into the future. To address these challenges, VA uses an iterative, multilevel process to mitigate various levels of uncertainty not only about program needs, but also about presidential policies, congressional actions, and future economic conditions that may affect funding needs in the year for which the request is made. VA’s continuing review of estimates in this iterative process does attempt to address some of these uncertainties, and as a result, VA’s estimates may change to better inform the President’s budget request. Essential to the usefulness of these estimates, as our prior work has shown, is obtaining sufficient data, making accurate calculations, and making realistic assumptions. However, the uncertainty inherent in budgeting always remains. The President’s request for VA health care services for fiscal years 2012 and 2013 was based, in part, on reductions in VA’s estimates for certain activities that were made using the EHCPM or other methods. 
The changes in VA’s estimates reflected a decline in expected spending for these activities compared to what VA officials said would have been the case if the management and provision of health care services had continued unchanged. For example, VA estimated that various operational improvements would substantially reduce the costs for carrying out some activities, such as contracting and purchasing, in fiscal years 2012 and 2013. As a result of these anticipated changes, VA estimated that it would achieve savings that could be used for other purposes. However, in 2006, we reported on a prior round of VA’s planned management efficiency savings and found that VA lacked a methodology for its assumptions about savings estimates. If the estimated savings for fiscal years 2012 and 2013 do not materialize and VA receives appropriations in the amount requested by the President, VA may have to make difficult tradeoffs to manage within the resources provided. We provided a draft of this report to the Secretary of VA and the Director of OMB for comment. VA had no comments on this report. OMB provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Veterans Affairs, the Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Randall B. Williamson at (202) 512-7114 or at williamsonr@gao.gov, or Denise M. Fantone at (202) 512-6806 or at fantoned@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contacts named above, James C. Musselwhite and Melissa Wolf, Assistant Directors; Rashmi Agarwal; Matthew Byer; Jennifer DeYoung; Amber G. 
Edwards; Krister Friday; Lauren Grossman; Tom Moscovitch; Lisa Motley; Leah Probst; and Steve Robblee made key contributions to this report.

Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President’s Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.
VA Health Care: Spending for and Provision of Prosthetic Items. GAO-10-935. Washington, D.C.: September 30, 2010.
VA Health Care: Reporting of Spending and Workload for Mental Health Services Could Be Improved. GAO-10-570. Washington, D.C.: May 28, 2010.
Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington, D.C.: September 24, 2009.
VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.
VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.
VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.
VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.
VA Health Care: Preliminary Findings on the Department of Veterans Affairs Health Care Budget Formulation for Fiscal Years 2005 and 2006. GAO-06-430R. Washington, D.C.: February 6, 2006.
Veterans Affairs: Limited Support for Reported Health Care Management Efficiency Savings. GAO-06-359R. Washington, D.C.: February 1, 2006.
VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans. GAO-06-333T. Washington, D.C.: January 9, 2006.
VA Long-Term Care: More Accurate Measure of Home-Based Primary Care Workload Is Needed. GAO-04-913. Washington, D.C.: September 8, 2004.
The Veterans Health Care Budget Reform and Transparency Act of 2009 requires GAO to report whether the amounts for the Department of Veterans Affairs' (VA) health care services in the President's budget request are consistent with VA's budget estimates as projected by the Enrollee Health Care Projection Model (EHCPM) and other methodologies. Based on the information VA provided, this report describes (1) the key changes VA identified that were made to its budget estimate to develop the President's budget request for fiscal years 2012 and 2013 and (2) how various sources of funding for VA health care and other factors informed the President's budget request for fiscal years 2012 and 2013. GAO reviewed documents describing VA's estimates projected by the EHCPM and changes made to VA's budget estimate that affect all services, including estimates developed using other methodologies. GAO also reviewed the President's budget request and VA's congressional budget justification, and interviewed VA officials and staff from the Office of Management and Budget (OMB). VA officials identified changes made to its estimate of the resources needed to provide health care services to reflect policy decisions, savings from operational improvements, resource needs for initiatives, and other items to help develop the President's budget request for fiscal years 2012 and 2013. For example, VA's estimate for non-recurring maintenance to repair health care facilities was reduced by $904 million for fiscal year 2012 and $1.27 billion for fiscal year 2013, due to a policy decision to fund other initiatives and hold down the overall budget request for VA health care. VA's estimates were further reduced by $1.2 billion for fiscal year 2012 and $1.3 billion for fiscal year 2013 due to expected savings from operational improvements, such as proposed changes to purchasing and contracting. 
Other changes had a mixed impact on VA's budget estimate, according to VA officials; some of these changes increased the overall budget estimate, while other changes decreased the overall estimate. The President's request for appropriations for VA health care for fiscal years 2012 and 2013 relied on anticipated funding from various sources. Specifically, of the $54.9 billion in total resources requested for fiscal year 2012, $50.9 billion was requested in new appropriations. This request assumes the availability of $4.0 billion from collections, unobligated balances of multiyear appropriations, and reimbursements VA receives for services provided to other government entities. Of the $56.7 billion in total resources requested for fiscal year 2013, $52.5 billion was requested in new appropriations, and $4.1 billion was anticipated from other funding sources. The President's request for fiscal year 2012 also included a request for about $953 million in contingency funding to provide additional resources should a recent economic downturn result in increased use of VA health care. Contingency funding was not included in the advance appropriations request for fiscal year 2013. Budgeting for VA health care is inherently complex because it is based on assumptions and imperfect information used to project the likely demand and cost of the health care services VA expects to provide. The iterative and multilevel review of the budget estimates can address some of these uncertainties as new information becomes available about program needs, presidential policies, congressional actions, and future economic conditions. As a result, VA's estimates may change to better inform the President's budget request. The President's request for VA health care services for fiscal years 2012 and 2013 was based, in part, on reductions to VA's estimates of the resources required for certain activities and operational improvements. 
However, in 2006, GAO reported on a prior round of VA's planned management efficiency savings and found that VA lacked a methodology for its assumptions about savings estimates. If the estimated savings for fiscal years 2012 and 2013 do not materialize and VA receives appropriations in the amount requested by the President, VA may have to make difficult trade-offs to manage within the resources provided. GAO is not making recommendations in this report. GAO provided a draft of this report to the Secretary of VA and the Director of OMB for comment. VA had no comments on this report. OMB provided technical comments, which GAO incorporated as appropriate.
HHS is the federal government’s principal agency responsible for protecting the health of all Americans and providing essential human services, especially for those who are least able to help themselves. The department manages more than 300 programs covering a wide spectrum of activities that include health and social science research, disease prevention, food and drug safety, health information technology, health insurance for elderly and disabled Americans (Medicare), health insurance for low-income people (Medicaid), and comprehensive health services for Native Americans. Other services provided by the department include financial assistance to low-income families, pre-school education programs such as Head Start, child abuse and domestic violence programs, substance abuse treatment and prevention programs, and programs to help older Americans, such as providing home-delivered meals. HHS has 14 operating divisions (see app. III for a description of each division) to manage its programs and administers more grant dollars than all other federal agencies combined. HHS has about 67,000 employees and is responsible for managing a fiscal year 2005 budget of approximately $581 billion. Each year HHS handles more than a billion health care claims, supports over 38,000 research projects focusing on diseases, provides funding to treat more than 650,000 persons with serious substance abuse or mental health problems, and serves more than 900,000 pre-school children. The Centers for Medicare & Medicaid Services (CMS) is an HHS operating division responsible for administering two major health programs. It administers the Medicare program, the nation’s largest health insurance program, which covers more than 42 million Americans. This program was enacted to extend affordable health insurance coverage to the elderly and was later expanded to cover the disabled. 
In partnership with the states, CMS also administers Medicaid, a means-tested health care program for low-income Americans. Medicaid is the primary source of health care for a large population of medically vulnerable Americans, including poor families, the disabled, and persons with developmental disabilities requiring long-term care. In coordination with the Medicaid program, the State Children’s Health Insurance Program provides health care coverage for children. CMS has about 4,900 employees and a fiscal year 2005 budget of approximately $480 billion, or 83 percent of the HHS budget, as shown in figure 1. HHS relies extensively on computerized systems to support its mission critical operations and store the sensitive information it collects. It uses these systems to support the department’s financial and management functions, maintain sensitive employee personnel information, and process financial and medical data for millions of health care recipients. Its local and wide area networks interconnect these systems. In addition, HHS relies on contractor-owned systems to process departmental information and support its mission. For fiscal year 2005, HHS planned to spend nearly $5 billion on information technology—more than any other federal agency except the Department of Defense. A significant amount of these funds will be spent to facilitate the processing and payment of Medicare claims processed by CMS or its Medicare contractors. Information system controls are a critical consideration for any organization that depends on computerized systems and networks to carry out its mission or business. Without proper safeguards, there is risk that individuals and groups with malicious intent may intrude into inadequately protected systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. 
In December 2002, Congress enacted the Federal Information Security Management Act of 2002 (FISMA) to strengthen security of information and information systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. In addition, FISMA provides that the Secretary of HHS is responsible for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the agency CIO the authority to ensure compliance with the requirements imposed on the agency under the act. HHS’s CIO is responsible for developing, promoting, and coordinating the departmentwide information security program; developing, promulgating, and enforcing department information resource management policies, standards, and guidelines; and appointing the HHS chief information security officer. Each operating division, including CMS, is responsible for complying with the requirements of FISMA and departmentwide security-related policies, procedures, and standards; reporting on the effectiveness of its information security program; and ensuring that information systems operated by or on its behalf by contractors provide adequate risk-based security safeguards. 
HHS and CMS in particular have significant weaknesses in electronic access controls and other information system controls designed to protect the confidentiality, integrity, and availability of information and information systems. A key reason for these weaknesses is that the department has not yet fully implemented a departmentwide information security program. As a result, HHS’s medical and financial information systems are vulnerable to unauthorized access, use, modification, and destruction that could disrupt the department’s operations. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing electronic controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, and information. Inadequate electronic access controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Electronic access controls include those related to network management, user accounts and passwords, user rights and file permissions, and auditing and monitoring of security-related events. Our analysis of reports issued by the OIG and independent auditors disclosed that HHS did not consistently implement effective electronic access controls in each of these areas. Networks are collections of interconnected computer systems and devices that allow individuals to share resources such as computer programs and information. Because sensitive programs and information are stored on or transmitted along networks, effectively securing networks is essential to protecting computing resources and data from unauthorized access, manipulation, and use. 
Organizations secure their networks, in part, by installing and configuring network devices that permit authorized network service requests, deny unauthorized requests, and limit the services that are available on the network. Devices used to secure networks include (1) firewalls that prevent unauthorized access to the network, (2) routers that filter and forward data along the network, (3) switches that forward information among segments of a network, and (4) servers that host applications and data. Network services consist of protocols for transmitting data between network devices. Insecurely configured network services and devices, including those without current software patches, can make a system vulnerable to internal or external threats, such as denial-of-service attacks. Because networks often include both external and internal access points for electronic information assets, failure to adequately secure these access points increases the risk of unauthorized disclosure and modification of sensitive information or disruption of service. HHS policy requires that all incoming and outgoing connections from departmental systems and networks to the Internet, intranets, and extranets be made through a firewall and that effective technical controls be implemented to protect computing resources connected to the network. Our analysis found that HHS did not consistently configure network services and devices securely to prevent unauthorized access to and ensure the integrity of computer systems operating on its networks. The reports we reviewed identified weaknesses in the way that HHS operating divisions and contractors restricted network access, managed antivirus software, configured network devices, and protected information traversing the HHS networks. 
For example:
- System administrative access was not always adequately restricted, and unnecessary services were available on several network devices, increasing the risk that unauthorized individuals could gain access to the operating system.
- Antivirus software was not always installed or up-to-date on the operating divisions’ and contractors’ workstations, increasing the risk that viruses could infect HHS systems and potentially disable or disrupt system operations.
- Key network devices were not securely configured to prevent unauthorized individuals from gaining access to sensitive system configuration files and router access control lists. These weaknesses could allow an external attacker to circumvent network controls and thereby gain unauthorized access to the internal network.
- HHS did not encrypt certain information traversing its networks. Instead, it used clear-text protocols that make network traffic susceptible to eavesdropping.
- HHS’s operating divisions and contractors did not consistently patch their computer systems and network devices in a timely manner. For example, the OIG reported that approximately 25 percent (287 of 1,129) of the systems tested at one operating division did not have up-to-date patches installed on them. Thirty of the machines tested were missing nine or more software patches that had been rated as critical by the vendor. At another operating division, over 90 high-risk software patch management vulnerabilities were outstanding from June 1999 through April 2005.

Failure to keep system patches up-to-date could lead to denial-of-service attacks or to individuals gaining unauthorized access to network resources. According to the HHS chief information security officer, a patch management subcommittee was formed to address this issue and has formulated and published an approach to the department’s patch management problems. 
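The patch-tracking gap described above comes down to comparing what is installed on each host against the patches a vendor has rated critical. A minimal sketch of that comparison follows; the patch identifiers and host names are illustrative, not actual HHS or vendor data.

```python
# Hedged sketch (hypothetical patch IDs and hosts): flag hosts missing
# patches that a vendor has rated critical, the kind of gap the OIG found.

CRITICAL = {"KB-2005-039", "KB-2005-043"}  # illustrative critical patch IDs

def missing_critical(installed, critical=CRITICAL):
    """Return the critical patch IDs absent from a host's installed set."""
    return sorted(critical - set(installed))

hosts = {
    "ws-0413": ["KB-2005-039"],                 # one critical patch missing
    "ws-0891": ["KB-2005-039", "KB-2005-043"],  # fully patched
}

for host, installed in hosts.items():
    gaps = missing_critical(installed)
    if gaps:
        print(f"{host}: missing critical patches {gaps}")
```

A real patch management program would feed this comparison from vendor advisories and automated inventory scans rather than hand-maintained lists.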
A computer system must be able to identify and differentiate among users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system must also establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication—such as user account and password combinations—provides the basis for establishing individual accountability and for controlling access to the system. Accordingly, agencies (1) establish password parameters, such as number of characters, type of characters, and the frequency with which users should change their passwords, in order to strengthen the effectiveness of passwords for authenticating the identity of users; (2) require encryption for passwords to prevent their disclosure to unauthorized individuals; and (3) implement procedures to control the use of user accounts. HHS policy requires that all operating divisions implement and enforce logical password controls for all departmental systems and networks. Our analysis of reported weaknesses showed that HHS did not adequately control user accounts and passwords to ensure that only authorized individuals were granted access to its systems. For example, the department and its contractors did not always implement strong passwords, instead using vendor-default or easy-to-guess passwords. Additionally:
- One CMS Medicare contractor set passwords to never expire for 28 service accounts with powerful administrative privileges. As a result, an unauthorized individual could use a compromised user identification and password for an indefinite period to gain unauthorized access to server resources. 
- Firewall administrators for another CMS Medicare contractor used a shared administrative account. As a result, the actions taken by these individuals cannot be traced back to the responsible individual.
- The minimum password length on one operating division’s local area network was set to zero. Consequently, users could create short passwords, which tend to be easier to guess or crack than longer passwords. In addition, passwords on this local area network were not required to be changed at initial logon.

Such weaknesses increase the risk that passwords may be disclosed to unauthorized users and used to gain access to the system. They also diminish the effectiveness of these controls for attributing system activity to individuals. As a result, HHS may not be able to hold these users individually accountable for system activity.

User Rights and File Permissions

The concept of “least privilege” is a basic underlying principle for securing computer systems and data. It means that users are granted only those access privileges needed to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that are associated with a particular file or directory and regulate which users can access them and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. HHS policy requires that access privileges be granted to users at the minimum level required to perform their job-related duties. 
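The least-privilege principle described above can be checked mechanically: a file whose permission bits grant read or write access to every user on the system is, by definition, more open than any single job function requires. A minimal sketch, assuming POSIX permission bits (the demonstration file is a throwaway temporary file, not a real HHS path):

```python
import os
import stat
import tempfile

# Illustrative least-privilege check (POSIX-only): report files whose
# permission bits let any user on the system read or modify them.

def overly_permissive(path):
    """Return the world-access problems found on a file, if any."""
    mode = os.stat(path).st_mode
    flags = []
    if mode & stat.S_IROTH:
        flags.append("world-readable")
    if mode & stat.S_IWOTH:
        flags.append("world-writable")
    return flags

# Demonstration on a throwaway temporary file.
fd, tmp = tempfile.mkstemp()
os.close(fd)
os.chmod(tmp, 0o646)            # rw-r--rw- : open to everyone
print(overly_permissive(tmp))   # both flags reported
os.chmod(tmp, 0o600)            # rw------- : owner only
print(overly_permissive(tmp))   # no flags
os.unlink(tmp)
```

An auditor would run a check like this recursively over startup scripts and configuration directories rather than over a single file.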
Our analysis of OIG reports showed that HHS granted access rights and permissions that gave some users more access to departmental information and medical systems than they needed to perform their jobs. For example, the following vulnerabilities were identified:
- All users could access world-readable startup scripts and files on several Medicare contractor systems. A malicious user could use this information to increase their system privileges.
- Members of the “Everyone” group were granted access to sensitive Windows directories, files, and registry settings, even though some did not have a legitimate business need for this access.
- Twenty-two groups or users without a legitimate need could access and update mainframe production data at one CMS Medicare contractor facility.
- Six of 15 employees reviewed at one operating division retained access privileges to the local area network after their separation from the department.

Inappropriate access to sensitive files and directories provides opportunities for individuals to circumvent security controls to deliberately or inadvertently read, modify, or delete critical or sensitive information and computer programs. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-related events. 
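The audit-trail concept just described amounts to recording, for every security-relevant action, who acted, what they did, and which resource was touched, so that any transaction can later be traced to an individual. A minimal sketch using Python's standard `logging` module; the logger name, event fields, and example events are illustrative:

```python
import logging

# Illustrative application-level audit trail: each security-relevant
# action is stamped with who/what/which-resource so activity can later
# be traced to a specific individual.

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter("%(asctime)s AUDIT %(message)s"))
audit.addHandler(_handler)

def record_event(user, action, target):
    """Log one auditable event and return the recorded message."""
    msg = f"user={user} action={action} target={target}"
    audit.info(msg)
    return msg

record_event("jdoe", "MODIFY", "claims_table")
record_event("jdoe", "DELETE", "audit_config")
```

In practice such a trail must itself be protected (append-only storage, restricted permissions) and actually reviewed, since the weaknesses reported below involved logging that was disabled, overwritten, or never monitored.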
HHS policy requires that audit logging be enabled for all departmental systems and networks so that security-related events—the manipulation, modification, or deletion of data—can be monitored and analyzed for unauthorized activity. HHS has not consistently audited and monitored security-related system activity on its systems. For example, the OIG reported that logging on some UNIX systems was either disabled or configured to overwrite these events, firewall and router logs were not routinely monitored, and procedures for classifying and investigating security-related events had not been documented at several HHS operating divisions and CMS Medicare contractors. As a result, if a system was modified or disrupted, the department’s ability to trace or recreate events could be diminished. In addition, these weaknesses could allow unauthorized access to go undetected. In response to weaknesses identified in electronic access controls, the HHS chief information security officer indicated that significant progress has been made in correcting these weaknesses and that preliminary results of fiscal year 2005 audits, by independent auditors, show a reduction in the number of weaknesses. In addition, the independent auditor of HHS’s financial statements for fiscal year 2005 reported that HHS had made significant progress in strengthening system controls, although it continued to identify general controls issues that represent significant deficiencies in the design and operation of electronic access controls. In addition to electronic access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information and systems. These controls include policies, procedures, and techniques to physically secure computer resources, conduct appropriate background investigations, provide sufficient segregation of duties, and prevent unauthorized changes to application software. 
Our analysis of reports issued by the OIG and independent auditors disclosed significant weaknesses in each of these areas. These weaknesses increase the risk that unauthorized individuals can gain access to HHS information systems and inadvertently or deliberately disclose, modify, or destroy the sensitive medical and financial data that the department relies on to deliver its vital services. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted, in order to ensure that access continues to be appropriate. HHS policy requires that physical access to rooms, work areas and spaces, and facilities containing departmental systems, networks, and data be limited to authorized personnel; that controls be in place for deterring, detecting, monitoring, restricting, and regulating access to sensitive areas at all times; and that controls be commensurate with the level of risk and sufficient to safeguard these resources against possible loss, theft, destruction, accidental damage, hazardous conditions, fire, malicious actions, and natural disasters. Our analysis showed that HHS did not effectively implement physical controls, as the following examples illustrate:
- One CMS Medicare contractor used a privately owned vehicle and an unlocked container to transport approximately 25,000 Medicare check payments over a 1-year period.
- Four hundred forty individuals were granted unrestricted access to an entire data center, including a sensitive area within the data center, although their job functions did not require them to have such access.
- Surveillance cameras used for monitoring a facility were not functioning, leading to blind spots in the data center’s perimeter security.
- Three individuals with access to an operating division’s data center did not have management approval for such access.

These weaknesses in physical security increase the risk that unauthorized individuals could gain access to sensitive computing resources and data and inadvertently or deliberately misuse or destroy them. According to Office of Management and Budget (OMB) Circular A-130, it has long been recognized that the greatest harm to computing resources has been done by authorized individuals engaged in improper activities—whether intentionally or accidentally. Personnel security controls (such as screening individuals in positions of trust) are particularly important where the risk and magnitude of potential harm is high. The National Institute of Standards and Technology (NIST) guidelines suggest that agencies determine the sensitivity of particular positions, based on such factors as the type and degree of harm that the individual could cause by misusing the computer system and on more traditional factors, such as access to classified information and fiduciary responsibilities. Background investigations help an organization to determine whether a particular individual is suitable for a given position by attempting to ascertain the person’s trustworthiness and appropriateness for the position. The exact type of screening that takes place depends on the sensitivity of the position and any applicable regulations by which the agency is bound. HHS policy requires that all information security employees and contractor personnel be designated with position-sensitivity levels that are commensurate with the responsibilities and risks associated with their position. In addition, it requires suitability background investigations to be completed and favorably adjudicated for all personnel assigned to these positions prior to allowing them access to sensitive HHS systems and networks. 
Our analysis of prior reports showed that background investigations were not always performed. For example, 13 CMS Medicare contractors had weaknesses in their background investigation policies and procedures. Six of the contractors reviewed were not adhering to established policies, while the remaining seven were not performing background investigations in a consistent manner. In addition, one operating division was unable to provide the background investigation status for any of the 49 contractor personnel working at its data center or for any of the 28 contractor personnel supporting one of its general support systems. Additionally, background investigations at three operating divisions were considered inadequate because they were not performed at the appropriate sensitivity level. Granting people access to sensitive data without appropriate background investigations increases the risk that unsuitable individuals could gain access to sensitive information, use it inappropriately, or destroy it. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often segregation of duties is achieved by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes be implemented, and computer resources could be damaged or destroyed. 
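The segregation-of-duties idea above can be expressed as a simple membership check: no one should belong to two roles that together give control over an entire critical process, such as both writing application code and promoting it into production. A minimal sketch with illustrative group names and users (not actual HHS roles):

```python
# Hedged sketch (illustrative group membership): flag users who hold both
# development and production-deployment privileges, a combination that
# segregation of duties is meant to prevent.

groups = {
    "developers":     {"asmith", "bjones", "clee"},
    "prod_deployers": {"clee", "dkim"},
}

def sod_conflicts(dev, prod):
    """Users who can both write code and promote it to production."""
    return sorted(dev & prod)

print(sod_conflicts(groups["developers"], groups["prod_deployers"]))
```

Set intersection is enough here because a conflict is simply membership in both incompatible roles; a fuller review tool would evaluate a matrix of many mutually exclusive role pairs.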
HHS policy requires operating divisions to ensure that responsibilities with a security impact be shared among multiple staff by enforcing the concept of separation of duties, which requires that individuals do not have control of the entirety of a critical process. Our analysis of OIG reports showed that HHS did not always sufficiently segregate computer functions. For example, some software developers had full access to both development and production software libraries. To illustrate, UNIX developers at one facility used a shared user account to promote development changes into the production environment. In another instance, two individuals with full access to development source code also had update capabilities to production libraries. Consequently, there is an increased risk that these individuals could introduce software errors into production or perform unauthorized system activities without being detected. It is important to ensure that only authorized and fully tested application programs are placed into operation. To ensure that changes to application programs are necessary, work as intended, and do not result in the loss of data or program integrity, such changes should be documented, authorized, tested, and independently reviewed. In addition, test procedures should be established to ensure that only authorized changes are made to the application’s program code. HHS policy requires that operating divisions establish, implement, and enforce change management and configuration management controls on all departmental systems and networks that process, store, or communicate sensitive information. However, our analysis showed that HHS did not always document or control changes to application programs, as the following examples demonstrate:
- Authorization forms did not exist for each of the 21 application control changes reviewed at one Medicare contractor facility. In addition, change control procedures were out-of-date and did not reflect current process and practice.
- Testing documentation at one operating division was not maintained for 4 of 15 change requests reviewed.

Without adequately documented or controlled application change control procedures, changes may be implemented that are not authorized, tested, or approved. Further, the lack of adequate controls places HHS at greater risk that software supporting its missions will not produce reliable data or effectively meet its business needs. In response to weaknesses identified in other information security controls, the HHS chief information security officer indicated that significant progress has been made in correcting these weaknesses and that preliminary results of fiscal year 2005 audits, by independent auditors, show a reduction in the number of weaknesses. In addition, the independent auditor of HHS’s financial statements for fiscal year 2005 reported that HHS had made significant progress in strengthening system controls, although it continued to identify general controls issues that represent significant deficiencies in the design and operation of key controls such as physical access, system software, and application development and program change controls. A key reason for the information security weaknesses identified at HHS was that the department had not yet fully implemented its information security program. A departmentwide security program provides a framework and continuing cycle of activity for managing risk, developing security policies, assigning responsibilities, and monitoring the adequacy of the entity’s computer-related controls. Without such a program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. 
Such conditions may lead to insufficient protection of sensitive or critical resources and disproportionately high expenditures for controls over low-risk resources. FISMA requires each agency to develop, document, and implement an information security program that includes the following key elements:
- periodic assessments of the risk and the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems;
- policies and procedures that (1) are risk-based, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;
- plans for providing adequate information security for networks, facilities, and systems;
- security awareness training to inform personnel—including contractors and other users of information systems—of information security risks and of their responsibilities in complying with agency policies and procedures;
- at least annual testing and evaluation of the effectiveness of information security policies, procedures, and practices relating to management, operational, and technical controls of every information system identified in the agency’s inventory;
- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices;
- procedures for detecting, reporting, and responding to security incidents; and
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. 
FISMA also requires each agency to (1) report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, and practices and its compliance with requirements, and (2) have its OIG or an independent external auditor perform an independent annual evaluation of the agency’s information security program and practices. HHS has begun to implement the foundation for an effective information security program through its Secure One initiative by developing and documenting policies and procedures that designate implementation responsibilities. For example, HHS’s information security program provides baseline security policies and standards for the department. Operating divisions are required to comply with departmental standards or develop specific standards that exceed them. In addition, HHS uses an automated security management tool to collect, analyze, and report FISMA data. Similarly, CMS has made progress in developing and documenting its information security policies and procedures. Although HHS has made progress in developing and documenting a departmentwide information security program, it has not fully implemented the following key elements: risk assessments, policies and procedures, system security planning, security and awareness training, periodic testing and evaluation of controls, remedial action plans, incident handling, and continuity of operations. These weaknesses limit HHS’s ability to protect the confidentiality, integrity, and availability of its information and information systems. Identifying and assessing information security risks are essential to determining what controls are required. By increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted. OMB Circular A-130, appendix III, prescribes that risk be reassessed when significant changes are made to computerized systems—or at least every 3 years—as does HHS policy. 
Consistent with NIST guidance, HHS requires that risk assessments characterize the system, identify information sensitivity and threats, determine the risk level of those threats and corresponding vulnerabilities, and analyze the potential business impact of exploited vulnerabilities. HHS’s performance in conducting risk assessments has varied across the department. Our review of 10 CMS risk assessments found that they generally complied with applicable federal and departmental guidance. By contrast, two of the three Office of the Secretary risk assessments reviewed did not fully address key elements. For example, the risk assessments did not identify threat sources, threat actions, or risk levels, as described in NIST SP 800-30. Nor did they indicate whether a business impact analysis had been completed. HHS’s OIG also identified weaknesses in the department’s risk assessments. In its 2005 FISMA evaluation, the OIG reported that risk assessments had not been performed on two major systems—one at the Administration for Children and Families, and one at the Administration on Aging. In response to these weaknesses identified in the department’s information security program, the HHS chief information security officer stated that risk assessments are currently being tracked using the department’s FISMA data management tool, which compiles information security management data for monitoring and review. All operating divisions are required to enter their FISMA data into this automated tool so that it can be reviewed and validated by the Secure One program staff. The combination of this tool and feedback from the Secure One program is designed to improve the completion rate and quality of risk assessments. Missing or incomplete risk assessments could leave HHS systems with inadequate or inappropriate security controls that do not address those systems’ true risk, resulting in costly efforts to subsequently implement effective controls. 
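The risk-level determination described above—combining a threat's likelihood with its potential business impact—is often expressed as a qualitative matrix in the style of NIST SP 800-30. The following sketch uses the common three-level scheme from that guide; the specific level assignments are illustrative and are not HHS's actual methodology:

```python
# Illustrative sketch of a NIST SP 800-30 style risk-level determination:
# a qualitative likelihood x impact matrix. The level assignments follow the
# common high/medium/low scheme; they are not HHS policy.

RISK_MATRIX = {
    ("high",   "high"):   "high",
    ("high",   "medium"): "medium",
    ("high",   "low"):    "low",
    ("medium", "high"):   "medium",
    ("medium", "medium"): "medium",
    ("medium", "low"):    "low",
    ("low",    "high"):   "low",
    ("low",    "medium"): "low",
    ("low",    "low"):    "low",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Map a threat's likelihood and business impact to a qualitative risk level."""
    return RISK_MATRIX[(likelihood.lower(), impact.lower())]

print(risk_level("high", "medium"))  # medium
```

A risk assessment that omits threat sources or risk levels—as the Office of the Secretary assessments discussed above did—leaves both inputs to this determination undefined, so no defensible risk level can be assigned.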
Another key task in implementing an effective information security program is to develop and document risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. If properly implemented, policies and procedures should help to cost-effectively reduce the risk of unauthorized access, modification, and destruction of information and systems. Technical security standards should provide consistent implementing guidance for each computing environment. Because security policies are the primary mechanism by which management communicates its views and requirements, it is important to develop and document them. FISMA requires each agency to develop minimally acceptable system configuration requirements and ensure compliance with them. Systems with secure configurations have fewer vulnerabilities and are better able to thwart network attacks. HHS has not developed departmentwide policies regarding minimally acceptable configuration requirements. According to HHS’s chief information security officer, HHS has neither developed nor documented such configuration requirements for its operating systems. The OIG reported in its fiscal year 2005 FISMA evaluation that these requirements were being maintained at the operating division level. In addition, the OIG found that three of the six operating divisions had not implemented minimum acceptable configuration requirements for their operating systems. Without departmentwide policies for developing minimally acceptable configuration requirements for its information systems, HHS may not be able to cost-effectively reduce information security risks to an acceptable level. The objective of system security planning is to improve the protection of information technology resources. A system security plan is to provide a complete and up-to-date overview of the system’s security requirements and describe the controls that are in place or planned to meet those requirements. 
FISMA requires that agency information security programs include subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. OMB Circular A-130 specifies that agencies develop and implement system security plans for major applications and for general support systems and that these plans address policies and procedures for providing management, operational, and technical controls. According to NIST, security plans should include existing or planned security controls, the individual responsible for the security of the system, a description of the system and its interconnected environment, and rules of behavior. HHS policy requires all of its operating divisions to develop and document system security plans for all departmental systems and networks in accordance with NIST guidance and to update such plans at least once every 3 years or when significant changes occur to the system. Our review found that HHS and CMS system security plans generally complied with applicable federal and departmental guidance. We examined seven plans and determined that they were up-to-date, addressed existing controls, identified responsible security personnel, described the system and its interconnections, and included rules of behavior. However, our analysis of OIG reports found that security plans had not been completed for two major systems—one at the Administration for Children and Families, and one at the Administration on Aging. Until its operating divisions complete security plans for all systems, HHS cannot ensure that appropriate controls are in place to protect its systems and critical information. Computer intrusions and security breakdowns often occur because computer users fail to take appropriate security measures. 
For this reason, it is vital that employees and contractors who use computer resources in their day-to-day operations be made aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining its confidentiality, integrity, and availability. FISMA requires that an information security program promote awareness and provide training for users (federal employees and contractors) so that they can understand the system security risks and their role in implementing related policies and controls to mitigate those risks. HHS policy requires the establishment of an annual security awareness training program for all employees and contractors. In the event that a security breach occurs, adequately trained security personnel are vital to a timely and appropriate response. Depending on an employee’s specific security role, specialized training could include training in incident detection and response, physical security, or firewall configuration. FISMA requires agency chief information officers to ensure that personnel with significant information security responsibilities receive specialized security training. HHS policy also requires specialized security education and awareness training for all individuals with significant security responsibilities. Although the department has made progress in security awareness training, it has not provided adequate specialized training to all employees with significant security-related responsibilities. In fiscal year 2005, HHS reported that 98 percent of its employees, including contractors, had received security awareness training. However, it reported that 32 percent of its employees with significant security-related responsibilities had not received specialized security training. In contrast, CMS reported that 100 percent of its employees with significant security-related responsibilities had received such training. 
Without sufficiently trained security personnel, security lapses are more likely to occur and could contribute to information security weaknesses at HHS. Another key element of an information security program is testing and evaluating system controls to ensure that they are appropriate, effective, and comply with policies. An effective program of ongoing tests and evaluations can be used to identify and correct information security weaknesses. This type of oversight demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests may encourage compliance with security policies, the full benefits of testing are not achieved unless the test results are analyzed by security specialists and business managers and used as a means of identifying new problem areas, reassessing the appropriateness of existing controls, and identifying the need for new controls. FISMA requires that agencies test and evaluate the information security controls of their systems, and that the frequency of such tests be based on risk, but occur no less than annually. HHS requires systems and networks that contain sensitive or mission critical information to undergo vulnerability scanning and/or penetration testing to identify security threats at least annually or when significant changes are made to the system or network. HHS also requires that a self-assessment be conducted of all departmental systems and networks at least annually in accordance with NIST SP 800-26. Consistent with FISMA provisions and HHS guidance, CMS policy also requires periodic testing and evaluation of its information systems’ security controls. Although HHS has initiatives under way to improve its testing and evaluation of controls, it has not fully implemented an ongoing program of tests and evaluations. 
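The at-least-annual testing requirement described above can be tracked mechanically against a system inventory. The sketch below flags systems whose controls have not been tested within the required interval; the system names and dates are hypothetical, not drawn from HHS's actual inventory:

```python
# Illustrative sketch of tracking the FISMA "at least annual" control-testing
# requirement: given each system's last test date, flag systems whose controls
# are overdue for testing. System names and dates are hypothetical.

from datetime import date, timedelta

def overdue_for_testing(last_tested: dict, today: date, max_age_days: int = 365):
    """Return system names whose last control test is older than max_age_days."""
    return sorted(
        name for name, tested in last_tested.items()
        if (today - tested) > timedelta(days=max_age_days)
    )

inventory = {
    "claims-processing": date(2005, 3, 1),
    "enterprise-network": date(2005, 11, 15),
    "financial-statements": date(2004, 6, 30),
}

print(overdue_for_testing(inventory, today=date(2005, 12, 31)))
# ['financial-statements']
```

A risk-based program would shorten `max_age_days` for high-risk systems, consistent with FISMA's requirement that test frequency be based on risk but occur no less than annually.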
Our analysis of the OIG’s fiscal year 2005 FISMA report found that several operating divisions had not tested and evaluated security controls for all their systems. For example, three systems at three different operating divisions had not undergone system testing and evaluation. At another operating division, system tests and evaluations for three of its six major applications had not been completed. Without comprehensive tests and evaluations of security controls, HHS cannot be assured that employees and contractors are complying with established policies or that those policies and controls are appropriate and working as intended. Remedial action plans, also known as plans of action and milestones, can assist agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses in information systems. According to OMB Circular A-123, agencies should take timely and effective action to correct deficiencies that they have identified through a variety of information sources. To accomplish this, remedial action plans should be developed for each deficiency, and progress should be tracked for each. In compliance with OMB policy, HHS requires the capture of all information security program and system control weaknesses that require mitigation in remedial action plans. In addition, HHS has provided information security managers and system owners guidance for developing, maintaining, and reporting their remedial action plans. Our review of OIG reports on selected operating divisions identified shortcomings in the HHS remedial action process. For example, the remedial action plans for three operating divisions did not include weaknesses previously identified in the operating divisions’ risk assessments, OIG audits, or other independent audits. 
Moreover, the remedial action plans for four operating divisions contained overdue corrective action items and lacked key corrective action information, such as the risk level assigned to weaknesses, the resources needed to remedy the weaknesses, and adequate support to demonstrate closed weaknesses. Our review of CMS remedial action plans yielded similar results. Specifically, we found that 20 percent of the corrective actions did not identify the resources needed to correct the weaknesses. Without a sound remediation process, HHS cannot be assured that weaknesses in its information security program will be efficiently and effectively corrected. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect and respond to them before significant damage is done. In addition, analyzing security incidents allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. Incident reports can be used to provide valuable input for risk assessments, help in prioritizing security improvement efforts, and illustrate risks and related trends for senior management. FISMA requires that agency information security programs include procedures for detecting and reporting security incidents. To ensure effective handling of incidents, HHS policy requires the establishment and maintenance of an incident response capability that includes preparation, identification, containment, eradication, recovery, and follow-up capabilities. HHS operating divisions did not always employ adequate incident detection capabilities. Our analysis of OIG reports found, for example, that 13 CMS Medicare contractors had weaknesses in their intrusion detection policies and procedures. 
Five of the contractors did not have intrusion detection systems in place, while six were cited for either not reporting incidents in accordance with FISMA guidance or not reporting incidents to CMS. The remaining two contractors exhibited weaknesses in their incident monitoring processes and procedures. Finally, one operating division used router and firewall logs for troubleshooting instead of for intrusion detection. The wide disparity in the reporting of security incidents and events at HHS and its operating divisions also raises concern. For example, the Food and Drug Administration reported over 16 million events, while the Centers for Medicare & Medicaid Services and the Centers for Disease Control and Prevention combined reported fewer than 1,600, as indicated in table 1. HHS operating divisions collectively reported over 18 million events during September 2005 but fewer than 10 incidents. We did not attempt to assess the accuracy of the reported events and incidents. However, the disparity in the number of events reported by operating divisions of relatively similar size raises concerns. This disparity may indicate inconsistent detection criteria and configuration settings for the respective intrusion detection systems. The reporting disparities may also be influenced by the type and location of the intrusion detection systems. For example, an intrusion detection system located behind a firewall detects fewer events than one located on the perimeter in front of a firewall because of the firewall’s ability to block certain network traffic. An intrusion detection system’s visibility to the Internet also increases its potential exposure to security events. Without consistent detection and reporting, HHS cannot be assured that it is handling incidents in an effective manner. Continuity of operations controls can enable systems to be recovered quickly and effectively following a service disruption or disaster. 
Such controls include plans and procedures designed to protect information resources and minimize the risk of unplanned interruptions, along with a plan to recover critical operations should interruptions occur. These controls should be designed to ensure that when unexpected events occur, key operations continue without interruption or are promptly resumed, and critical and sensitive data are protected. They should also be tested annually or as significant changes are made. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. Consistent with federal guidance, HHS policy requires operating divisions to identify, prioritize, and document disaster recovery planning requirements for all critical departmental systems, networks, data, and facilities. CMS’s information security policy complies with the departmentwide policy. CMS’s Information Security Handbook provides additional guidance as to what key elements should be included in contingency plans. These elements are further detailed in its guidance to CMS contractors. HHS has various efforts underway to address continuity of operations. In its fiscal year 2005 FISMA report, the OIG noted the elimination of the department’s significant deficiency relating to contingency planning and disaster recovery. However, shortcomings in continuity of operations still exist. In its FISMA report to OMB for fiscal year 2005, HHS reported that 19.2 percent of its FISMA inventoried systems (34 out of 177) did not have tested contingency plans. Furthermore, the OIG also identified deficiencies in continuity of operations plans developed at HHS’s operating divisions. 
For example, contingency plans for four major applications at one operating division were not application specific but were actually the same plan originally developed for server recovery; contingency plans did not exist for the local area networks of four operating divisions; another operating division did not prioritize the recovery of its systems in its divisionwide contingency plan; and inadequate documentation existed to determine whether testing had been performed for one of another division’s contingency plans. As a result of these weaknesses, the department has limited assurance that operating divisions will be able to protect critical and sensitive information and information systems and resume operations promptly when unexpected events or unplanned interruptions occur. If continuity of operations controls are inadequate, even a relatively minor interruption could result in significant adverse impact on HHS operating divisions’ ability to recover and resume operations. Given the size and significance of HHS’s information technology investments, and the sensitivity of the medical, personal, and financial data it maintains through these investments, it is imperative that the department develop strong information security controls and implement a comprehensive information security program. While HHS has made progress toward developing and documenting a departmentwide information security program, significant weaknesses in information security controls could lead to the unauthorized disclosure, modification, or destruction of the sensitive data that HHS relies on to accomplish its vital mission. A key reason for these weaknesses is that HHS has not yet fully implemented a departmentwide information security program that can establish and maintain effective controls. 
Full implementation of such a program would provide for periodically assessing risks, establishing appropriate policies and procedures, developing and implementing security plans, promoting security awareness training, testing and evaluating the effectiveness of controls, implementing corrective actions, responding to incidents, and ensuring continuity of operations. Implementing such a program across all operating divisions requires effective management oversight and monitoring, especially at a department as diverse as HHS. Until HHS strengthens information security controls and fully implements its information security program, it will have limited assurance that its operations and assets are adequately protected. To help HHS fully implement its departmentwide information security program, we recommend that the Secretary of HHS direct the Chief Information Officer to develop and implement policies and procedures to ensure the establishment of minimum acceptable configuration requirements. In addition, we recommend that the Secretary direct the Chief Information Officer to take the following seven steps to ensure that operating divisions develop comprehensive risk assessments that address key elements; complete system security plans for all systems; provide specialized training to all individuals with significant security responsibilities; conduct tests and evaluations of the effectiveness of controls on operational systems, and document the results; review remedial action plans to ensure that they address all previously identified weaknesses and key corrective action information; implement intrusion detection systems and configure them to use consistent criteria for the detection and reporting of security incidents and events; and develop and test continuity of operations plans for all of their systems. The Department of Health and Human Services’ Inspector General transmitted the department’s written comments on a draft of this report (reprinted in app. II). 
In these comments, HHS supported our emphasis on improvements needed in key information security program elements, but stated that our report did not appropriately reflect the progress that the department has made in addressing information security. Specifically, HHS expressed concern that our evaluation approach did not provide an accurate or complete appraisal of the department’s information security program, in that the report does not mention the department’s defense-in-depth strategy or its accomplishment of two major goals—the department’s campaign to mitigate its significant deficiency pertaining to contingency planning and to reduce its number of reportable conditions by 25 percent. According to HHS, it employs a defense-in-depth strategy to ensure that threats are effectively addressed and mitigated. We acknowledge HHS’s statement on its defense-in-depth strategy, but note that the significant control weaknesses identified in this report and by independent auditors indicate that this strategy is not fully working as intended. With regard to the two major goals, we have revised the report to reflect the elimination of the contingency planning deficiency. Regarding the department’s reduction in the number of reportable conditions, in its report on internal controls, the OIG’s independent auditor reported progress made in strengthening security controls; however, it still reported weaknesses in several information security areas, including the entitywide security program, access controls, application development and program change controls, system software, and service continuity. HHS also noted that our report did not mention recent improvements or progress made in information security until a brief statement in the conclusion of the report, and that the report was predicated on findings originally documented by the HHS OIG in fiscal year 2005. 
However, throughout the report we acknowledge HHS’s improvements and progress in correcting information security weaknesses, and we have added statements based on these comments. In addition, as noted in our scope and methodology, our evaluation included the most recent reports issued at the time of our review. In its comments, HHS also expressed concern over our use of the word “significant” to describe the reported weaknesses. In its most recent report on internal controls, the OIG’s independent auditor reported information security as a “reportable condition” at the department. The auditor concluded that “the cumulative effect of these weaknesses represents significant deficiencies in the overall design and operation of internal controls.” Based on the findings in our report, the definition of “reportable condition,” and the comments of the independent auditor, we believe the use of the word “significant” is appropriate to describe these weaknesses. HHS also took exception to our conclusion that it had not fully implemented a departmentwide information security program, stating that our findings instead indicate that the full integration or maturity of the program has not been achieved. FISMA requires that agencies develop, document, and implement an information security program. As stated in our report, we acknowledge that HHS has made progress in developing and documenting its program. However, elements of the program have not been fully or consistently implemented. For example, three systems at three different operating divisions had not undergone system testing and evaluation. As a result, we believe that the use of the phrase “not fully implemented” is appropriate for describing HHS’s shortcomings in its information security program. Additionally, the department stated that our assessment of its security program was based on a small percentage of HHS systems. 
However, as noted in our scope and methodology, we selected applications and general support systems because they support HHS’s departmentwide financial reporting and communications, or Medicare payment and communication functions at CMS and its contractors—operations that are critical to the department. These included the Medicare Claims Processing Systems that processed over one billion claims and $294 billion in claims payments in 2004; the CMS Communication Network that provides connectivity between CMS and its business-related entities; and the HHS Enterprise Services Network that provides a shared network backbone for several HHS operating divisions. The department also noted that our statement that HHS had not developed departmentwide policies regarding minimally acceptable configuration requirements was inaccurate. In its comments, HHS states that “plans are in place” to standardize implementation in fiscal year 2006 and that the divisional chief information security officers formed a subcommittee to develop configuration standards. Although these are positive efforts, we believe that such statements support our conclusion that such policies have not yet been developed. In addition, the department noted that we did not acknowledge progress made relating to contingency planning. HHS stated that it had completed and tested contingency plans for 100 percent of its high-risk FISMA systems. However, the HHS OIG did not concur with this statement, reporting that one of the seven high-risk systems that they evaluated did not have tested contingency plans. As mentioned previously, the department also stated that we did not acknowledge the elimination of their sole existing significant deficiency relating to contingency planning and disaster recovery. We have revised the report to reflect the elimination of this deficiency. Finally, the department noted additional improvements specific to CMS that were not included in our report. 
The department cited the elimination of a long-standing CMS material weakness in Medicare electronic access controls. However, this material weakness was downgraded to a reportable condition, indicating that significant deficiencies still exist. The department also stated that we did not acknowledge significant progress in FISMA compliance made by its fiscal intermediaries and carriers and that it provided these results to the HHS OIG in early December 2005. However, these reports were not available for release to us at that time. Additionally, the department stated that we did not acknowledge CMS’s significant achievements in meeting its statutory responsibilities under FISMA, as reported by the HHS OIG. We acknowledge in the report that HHS, which includes CMS, has begun to implement the foundation for an effective information security program. While the HHS OIG FISMA report cited some achievements made by CMS, the HHS OIG also noted 28 exceptions in the CMS information security program. HHS also provided specific technical comments, which we have incorporated in the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to the Secretary of Health and Human Services. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
The objective of our review was to assess the effectiveness of the HHS information security program, particularly at CMS, in protecting the confidentiality, integrity, and availability of its information and information systems. To accomplish this objective, we evaluated the effectiveness of HHS’s information security controls, and whether HHS had developed, documented, and implemented a departmentwide information security program consistent with federal laws and policies. To evaluate the effectiveness of HHS’s information security controls, we examined 74 management and audit reports pertaining to information security practices and controls at 13 operating divisions issued by the department, its Office of the Inspector General (OIG), and independent auditors during 2004 and 2005. These reports identified information security control weaknesses at HHS, the operating divisions, and contractor-owned facilities, which we then classified according to the general control categories specified in our Federal Information System Controls Audit Manual (FISCAM). Further, these reports contained specific recommendations to the department to remedy identified information security control weaknesses. To evaluate whether HHS had developed and documented a departmentwide information security program consistent with federal laws and policies, we examined related documents, such as policies and procedures, handbooks, various types of security-related reports, and HHS’s information systems inventory. We assessed whether its program was consistent with the requirements of FISMA, as well as applicable Office of Management and Budget policies and National Institute of Standards and Technology guidance related to risk assessments, risk-based policies and procedures, information security plans, security awareness training, testing and evaluating security controls, remedial action plans, handling security incidents, and continuity of operations for information systems. 
We also held discussions with CMS and contractor officials responsible for information security management and with the HHS Inspector General staff regarding any related prior, ongoing, or planned work in these areas. To evaluate whether HHS had implemented an information security program consistent with federal laws and policies, we focused our review on CMS—the operating division with the largest budget in the department—as well as the Office of the Secretary, an operating division with a departmentwide perspective. We compared their documented practices and controls to the departmentwide information security program as well as applicable FISMA requirements, OMB policy, and NIST guidance. To determine how well the operating divisions were implementing their own policies and procedures, we evaluated available risk assessments, security plans, security and awareness training, system tests and evaluations, remedial actions, and continuity of operations for the following major applications and general support systems: Automated Financial Statement System—a system to collect operating divisions’ financial statement data to generate the departmentwide year-end and quarterly statements. Information Collection Review and Approval System—a web-based database application used by HHS, the Securities and Exchange Commission, and OMB to help federal agencies electronically administer and manage their information collection clearance responsibilities under the Paperwork Reduction Act. HHS’s Enterprise Services Network—the enterprise network for the department, comprising very high performance network services provided by a public communications carrier. Medicare Claims Processing Systems—a group of CMS contractor-operated systems used to process Medicare claims—including claims for inpatient hospital care, nursing facilities, home health care, and other health care services. 
CMS communications network—a private network that provides connectivity between CMS and the business-related entities that provide Medicare services.

We selected these applications and systems because they support either (1) HHS's enterprisewide financial reporting and communication functions or (2) CMS's and its contractors' Medicare payment and communication functions. We performed our work at HHS headquarters in Washington, D.C., and the CMS Central Office, located in Baltimore, Maryland. This review was performed from June through December 2005 in accordance with generally accepted government auditing standards.

Administration for Children and Families—responsible for some 60 programs that promote the economic and social well-being of children, families, and communities.

Administration on Aging—supports a nationwide network providing services to the elderly, especially to enable them to remain independent.

Agency for Healthcare Research and Quality—supports research on health care systems, health care quality and cost issues, access to health care, and the effectiveness of medical treatments. It provides evidence-based information on health care outcomes and quality of care.

Agency for Toxic Substances and Disease Registry—responsible for preventing exposure to hazardous substances from waste sites on the U.S. Environmental Protection Agency's National Priorities List and for developing toxicological profiles of chemicals at these sites.

Centers for Disease Control and Prevention—provides a system of health surveillance to monitor and prevent disease outbreaks, implements disease prevention strategies, and maintains national health statistics. The centers also provide for immunization services, workplace safety, and environmental disease prevention. In addition, the centers guard against international disease transmission, with personnel stationed in more than 25 foreign countries.
Centers for Medicare & Medicaid Services—administers the Medicare and Medicaid programs, which provide health care to about one in every four Americans. Medicare provides health insurance for more than 42.1 million elderly and disabled Americans. Medicaid, a joint federal-state program, provides health coverage for some 44.7 million low-income persons, including 21.9 million children, and nursing home coverage for the low-income elderly. CMS also administers the State Children's Health Insurance Program, which covers more than 4.2 million children.

Food and Drug Administration—responsible for assuring the safety of foods and cosmetics, and the safety and efficacy of pharmaceuticals, biological products, and medical devices—products that represent almost 25 cents of every dollar in U.S. consumer spending.

Health Resources and Services Administration—provides access to essential health care services for people who are low-income or uninsured or who live in rural areas or urban neighborhoods where health care is scarce. The agency helps prepare the nation's health care system and providers to respond to bioterrorism and other public health emergencies, maintains the National Health Service Corps, and helps build the health care workforce through training and education programs.

Indian Health Service—provides health services to 1.6 million American Indians and Alaska Natives of more than 550 federally recognized tribes. The Indian health system includes 49 hospitals, 247 health centers, 348 health stations, satellite clinics, residential substance abuse treatment centers, Alaska Native village clinics, and 34 urban Indian health programs.

National Institutes of Health—a medical research organization supporting over 38,000 research projects nationwide on diseases including cancer, Alzheimer's, diabetes, arthritis, heart ailments, and AIDS.
Office of Inspector General—responsible for protecting the integrity of HHS programs, as well as the health and welfare of the beneficiaries of those programs. The OIG is also responsible for reporting program and management problems, and recommendations to correct them, to both the Secretary of HHS and Congress. The OIG's duties are carried out through a nationwide network of audits, investigations, inspections, and other mission-related functions performed by OIG components.

Office of the Secretary—provides counsel to the Secretary on such issues as public affairs, legislation, budget, technology, and finance.

Program Support Center—created in 1995 to provide a wide range of administrative support within the Department of Health and Human Services, allowing the department's operating divisions to concentrate on their core functional and operational objectives.

Substance Abuse and Mental Health Services Administration—works to improve the quality and availability of substance abuse prevention, addiction treatment, and mental health services.

In addition to the person named above, Idris Adjerid, Larry Crosland, Jeffrey Knott, Carol Langelier, Ronald Parker, Amos Tevelow, and William Thompson made key contributions to this report.
The Department of Health and Human Services (HHS) is the nation's largest health insurer and the largest grant-making agency in the federal government. HHS programs affect all Americans, whether through direct services, scientific advances, or information that helps them choose medical care, medicine, or even food. For example, the Centers for Medicare & Medicaid Services (CMS), a major operating division within HHS, is responsible for the Medicare and Medicaid programs, which provide care to about one in every four Americans. In carrying out their responsibilities, both HHS and CMS rely extensively on networked information systems containing sensitive medical and financial information.

GAO was asked to assess the effectiveness of HHS's information security program, with emphasis on CMS, in protecting the confidentiality, integrity, and availability of its information and information systems.

HHS and CMS have significant weaknesses in controls designed to protect the confidentiality, integrity, and availability of their sensitive information and information systems. HHS computer networks and systems have numerous electronic access control vulnerabilities related to network management, user accounts and passwords, user rights and file permissions, and auditing and monitoring of security-related events. In addition, weaknesses exist in other types of controls designed to physically secure computer resources, conduct suitable background investigations, segregate duties appropriately, and prevent unauthorized changes to application software. All of these weaknesses increase the risk that unauthorized individuals could gain access to HHS information systems and inadvertently or deliberately disclose, modify, or destroy the sensitive data that the department relies on to deliver its vital services. A key reason for these control weaknesses is that the department has not yet fully implemented a departmentwide information security program.
While HHS has laid the foundation for such a program by developing and documenting policies and procedures, the department has not yet fully implemented key elements of its information security program at all of its operating divisions. Specifically, HHS and its operating divisions have not fully implemented elements related to (1) risk assessments, (2) policies and procedures, (3) security plans, (4) security awareness and training, (5) tests and evaluations of control effectiveness, (6) remedial actions, (7) incident handling, and (8) continuity of operations plans. Until HHS fully implements a comprehensive information security program, security controls may remain inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources and disproportionately high expenditures for controls over low-risk resources.
Since the 1960s, geostationary and polar-orbiting environmental satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA’s National Environmental Satellite Data and Information Service (NESDIS) is responsible for managing the civilian geostationary and polar-orbiting satellite systems as two separate programs, called GOES and the Polar Operational Environmental Satellites, respectively. Unlike polar-orbiting satellites, which constantly circle the earth in a relatively low polar orbit, geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 1). These satellites are uniquely positioned to provide timely environmental data to meteorologists and their audiences on the earth’s atmosphere, its surface, cloud cover, and the space environment. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. Furthermore, the satellites’ ability to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years (see table 1). Three satellites—GOES-11, GOES-12, and GOES-13—are currently in orbit. Both GOES-11 and GOES-12 are operational satellites, while GOES-13 is in an on-orbit storage mode. It is a backup for the other two satellites should they experience any degradation in service. The others in the series, GOES-O and GOES-P, are planned for launch over the next few years. 
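The "about 22,300 miles" figure follows from Kepler's third law: a satellite whose orbital period equals one sidereal day holds a fixed position over the equator. A quick check of that altitude, using standard published constants rather than figures from this report:

```python
import math

GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
T = 86164.0905         # one sidereal day, seconds
R_EARTH = 6378.137e3   # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * a^3 / GM, solved for the orbit radius a.
semi_major_axis = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Altitude above the surface, converted from meters to statute miles.
altitude_miles = (semi_major_axis - R_EARTH) / 1609.344
print(round(altitude_miles))  # ≈ 22236, i.e., "about 22,300 miles"
```

The exact result is roughly 22,236 miles (35,786 km), which the report rounds to about 22,300.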
NOAA is also planning a future generation of satellites, known as the GOES-R series, which are planned for launch beginning in 2012. Each of the operational geostationary satellites continuously transmits raw environmental data to NOAA ground stations. The data are processed at these ground stations and transmitted back to the satellite for broadcast to primary weather services both in the United States and around the world, including the global research community. Raw and processed data are also distributed to users via ground stations through other communication channels, such as dedicated private communication lines and the Internet. Figure 2 depicts a generic data relay pattern from the geostationary satellites to the ground stations and commercial terminals. NOAA is planning for the GOES-R program to improve on the technology of prior GOES series, in terms of both system and instrument improvements. The system improvements are expected to fulfill more demanding user requirements and to provide more rapid information updates. Table 2 highlights key system-related improvements GOES-R is expected to make to the geostationary satellite program. The instruments on the GOES-R series are expected to increase the clarity and precision of the observed environmental data. NOAA plans to acquire five different types of instruments. The program office considered two of the instruments—the Advanced Baseline Imager and the Hyperspectral Environmental Suite—to be most critical because they would provide data for key weather products. Table 3 summarizes the originally planned instruments and their expected capabilities. The program management structure for the GOES-R program differs from past GOES programs. Prior to the GOES-R series, NOAA was responsible for program funding, procurement of the ground elements, and on-orbit operation of the satellites, while NASA was responsible for the procurement of the spacecraft, instruments, and launch services. 
NOAA officials stated that this approach limited the agency's insight and management involvement in the procurement of major elements of the system. In contrast, under the GOES-R management structure, NOAA has responsibility for the procurement and operation of the overall system—including spacecraft, instruments, and launch services. NASA is responsible for the procurement of the individual instruments until they are transferred to the overall GOES-R system contractor for completion and integration onto the spacecraft. Additionally, to take advantage of NASA's acquisition experience and technical expertise, NOAA located the GOES-R program office at NASA's Goddard Space Flight Center. It also designated key program management positions to be filled by NASA personnel. These positions include the deputy system program director role for advanced instrument and technology infusion, the project manager for the flight portion of the system, and the deputy project manager for the ground and operations portion of the system. NOAA officials explained that they changed the management structure for the GOES-R program in order to streamline oversight and fiduciary responsibilities, but that they still plan to rely on NASA's expertise in space system acquisitions.

Satellite programs are often technically complex and risky undertakings, and as a result, they often experience technical problems, cost overruns, and schedule delays. We and others have reported on a historical pattern of repeated missteps in the procurement of major satellite systems, including the National Polar-orbiting Operational Environmental Satellite System (NPOESS), the GOES I-M series, the Space Based Infrared System High Program (SBIRS-High), and the Advanced Extremely High Frequency Satellite System (AEHF). Table 4 lists key problems experienced with these programs.
At the time of our review, NOAA was nearing the end of the preliminary design phase on its GOES-R program and planned to award a contract for the system's development in August 2007. However, because of concerns with potential cost growth, NOAA's plans for the GOES-R procurement are changing. To date, NOAA has issued contracts for the preliminary design of the overall GOES-R system to three vendors and expects to award a contract to one of these vendors to develop the system. In addition, to reduce the risks associated with developing new instruments, NASA has issued contracts for the early development of two instruments and for the preliminary designs of three other instruments. The agency plans to award these contracts and then turn them over to the contractor responsible for the overall GOES-R program. However, this approach is under review, and NOAA may wait until the instruments are fully developed before turning them over to the system contractor. Table 5 provides a summary of the status of contracts for the GOES-R program.

According to program documentation provided to the Office of Management and Budget in 2005, the official life cycle cost estimate for GOES-R was approximately $6.2 billion (see table 6). However, program officials reported that this estimate was over 2 years old and under review. At the time of our review, NOAA was planning to launch the first GOES-R series satellite in September 2012. The launch schedule was driven by a requirement that the satellites be available to back up the last remaining GOES satellites (GOES-O and GOES-P) should anything go wrong during the planned launches of those satellites. Table 7 provides a summary of the planned launch schedule for the originally planned GOES-R series.
Given its experiences with cost growth on the NPOESS acquisition, NOAA asked program officials to recalculate the total cost of the estimated $6.2 billion GOES-R program. In May 2006, program officials estimated that the life cycle cost could reach $11.4 billion. The agency then requested that the program identify options for reducing the scope of requirements for the satellite series. Program officials reported that there were over 10 viable options under consideration, including options for removing one or more of the planned instruments. The program office also reevaluated its planned acquisition schedule based on the potential program options. Specifically, program officials stated that if there was a decision to make a major change in system requirements, they would likely extend the preliminary design phase, delay the decision to proceed into the development and production phase, and delay the contract award date. At the time of our review, NOAA officials estimated that a decision on the future scope and direction of the program could be made by the end of September 2006. In mid-September 2006, NOAA officials reported that a decision on the future scope and direction of GOES-R had been made—and involved a reduction in the number of satellites and in planned program capabilities, a revised life cycle cost estimate, and the delay of key programmatic milestones. Specifically, NOAA reduced the minimum number of satellites to two. In addition, plans for developing the Hyperspectral Environmental Suite—which was once considered a critical instrument by the agency—were cancelled. Instead, the program office is exploring options that will ensure continuity of sounding data currently provided by the current GOES series. NOAA officials reported that the cost of the restructured program is not known, but some anticipate it will be close to the original program estimate of $6.2 billion. The contract award for the GOES-R system has been pushed out to May 2008. 
Finally, the planned launch date of the first satellite in the GOES-R series has been delayed until December 2014.

NOAA has taken steps to apply lessons learned from problems encountered on other satellite programs to the GOES-R procurement. Key lessons include (1) establishing realistic cost and schedule estimates, (2) ensuring sufficient technical readiness of the system's components prior to key decisions, (3) providing sufficient management at government and contractor levels, and (4) performing adequate senior executive oversight to ensure mission success. NOAA has established plans designed to mitigate the problems faced in past acquisitions; however, many activities remain to fully address these lessons. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the cost increases, schedule delays, and performance shortfalls that have plagued past procurements.

We and others have reported that space system acquisitions are strongly biased toward unrealistically low cost and schedule estimates. Our past work on military space acquisitions has indicated that during program formulation, the competition to win funding is intense and has led program sponsors to minimize their program cost estimates. NOAA programs have faced similarly unrealistic estimates. For example, the total development cost of the GOES I-M acquisition was over three times greater than planned, escalating from $640 million to $2 billion, and the delivery of the first satellite was delayed by 5 years.

NOAA has several efforts under way to improve the reliability of its cost and schedule estimates for the GOES-R program. NOAA's Chief Financial Officer has contracted with a cost-estimating firm to complete an independent cost estimate, while the GOES-R program office has hired a support contractor to assist with its internal program cost estimating.
The program office is reassessing its estimates based on preliminary information from the three vendors contracted to develop preliminary designs for the overall GOES-R system. Once the program office and independent cost estimates are completed, program officials intend to compare them and to develop a revised programmatic cost estimate that will be used in the decision on whether to proceed into system development and production. In addition, NOAA has arranged for an independent review team—consisting of former senior industry and government space acquisition experts—to assess the program office and independent cost estimates for this decision milestone. To improve its schedule reliability, the program office is conducting a schedule risk analysis to estimate the reserve funds and schedule margin needed to deal with unexpected problems and setbacks. Finally, the NOAA Observing System Council submitted a prioritized list of GOES-R system requirements to the Commerce Undersecretary for approval. This list is expected to allow the program office to act quickly in deleting lower priority requirements in the event of severe technical challenges or shifting funding streams.

While NOAA acknowledges the need to establish realistic cost and schedule estimates, several hurdles remain. As discussed earlier, at the time of our review the agency was considering reducing the requirements for the GOES-R program to mitigate the increased cost estimates. Until that decision was made, the agency's efforts to establish realistic cost estimates could not be fully effective in addressing this lesson. In addition, NOAA suspended the work being performed by its independent cost estimator. Now that the program's scope and direction are being further defined, it will be important for the agency to restart this work.
Further, the agency has not yet developed a process to evaluate and reconcile the independent and program office cost estimates once final program decisions are made. Without this process, the agency may lack the objectivity necessary to counter the optimism of program sponsors and is more likely to move forward with an unreliable estimate. Until it completes this activity, NOAA faces an increased risk that the GOES-R program will repeat the cost increases and schedule delays that have plagued past procurements.

Space programs often experience unforeseen technical problems in the development of critical components as a result of having insufficient knowledge of the components and their supporting technologies prior to key decision points. One key decision point is when an agency decides whether a component is sufficiently ready to proceed from a preliminary study phase into a development phase; this decision results in the award of the development contract. Another occurs during the development phase, when an agency decides whether the component is ready to proceed from design into production (also called the critical design review). Without sufficient technical readiness at these milestones, agencies could award development contracts for components that are not well understood and enter production with technologies that are not yet mature.

In 1997, NOAA began preliminary studies on technologies that could be used on the GOES-R instruments. These studies targeted existing technologies and assessed how they could be expanded for GOES-R. The program office is also conducting detailed trade-off studies on the integrated system to improve its ability to make decisions that balance performance, affordability, risk, and schedule.
For instance, the program office is analyzing the potential architectures for the GOES-R constellation of satellites—the quantity and configuration of satellites, including how the instruments will be distributed over them. These studies are expected to allow for a more mature definition of the system specifications. NOAA has also developed plans to have an independent review team assess project status on an annual basis once the overall system contract has been awarded. In particular, this team will review technical, programmatic, and management areas; identify any outstanding risks; and recommend corrective actions. This measure is designed to ensure that sufficient technical readiness has been reached prior to the critical design review milestone.

The program office's ongoing studies and plans are expected to provide greater insight into the technical requirements for key system components and to mitigate the risk of unforeseen problems in later acquisition phases. However, a key instrument currently under development—the Advanced Baseline Imager—has experienced technical problems, which could be an indication of more problems to come. These problems relate to, among other things, the design complexity of the instrument's detectors and electronics. As a result, the contractor is experiencing negative cost and schedule performance trends. As of May 2006, the contractor had incurred a total cost overrun of almost $6 million with the instrument's development only 28 percent complete. In addition, from June 2005 to May 2006, it was unable to complete approximately $3.3 million worth of scheduled work. Unless risk mitigation actions are aggressively pursued to reverse these trends, we project the cost overrun at completion to be about $23 million.
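A projection of roughly this size is consistent with a standard earned value extrapolation: if the cost performance observed so far (about $6 million over on 28 percent of the work) holds for the remaining work, the overrun scales with the remaining budget. The sketch below shows that arithmetic using the report's rounded figures; GAO's $23 million projection was presumably computed from the exact contract data, so the numbers differ slightly.

```python
def projected_overrun_at_completion(overrun_to_date_m, fraction_complete):
    """Project the cost overrun at completion, assuming the cost
    performance index (CPI) to date persists for the remaining work.

    overrun_to_date_m: actual cost minus earned value, in $ millions
    fraction_complete: earned value as a fraction of budget at completion
    """
    # With EAC = BAC / CPI, the overrun at completion reduces to
    # overrun_to_date / fraction_complete.
    return overrun_to_date_m / fraction_complete

# Advanced Baseline Imager: ~$6 million overrun at 28 percent complete.
print(round(projected_overrun_at_completion(6.0, 0.28), 1))  # ≈ 21.4 ($M)
```

The rounded inputs give about $21 million at completion, the same order of magnitude as the report's $23 million projection.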
While NOAA expects to make a decision on whether to move the instrument into production (a milestone called the critical design review) in January 2007, the contractor's current performance raises questions as to whether the instrument designs will be sufficiently mature by that time. Further, the agency does not have a process to validate the level of technical maturity achieved on this instrument or to determine whether the contractor has implemented sound management and process engineering to ensure that the appropriate level of technical readiness can be achieved prior to the decision milestone. Until it does so, NOAA risks making a poor decision based on inaccurate or insufficient information—which could lead to unforeseen technical problems in the development of this instrument.

In the past, we have reported on poor performance in the management of satellite acquisitions. The key drivers of poor management included inadequate systems engineering and earned value management capabilities, unsuitable allocation of contract award fees, inadequate levels of management reserve, and an inefficient decision-making and reporting structure within the program office. NOAA has taken numerous steps to restructure its management approach on the GOES-R procurement in an effort to improve performance and to avoid past mistakes. These steps include the following:

The program office revised its staffing profile to provide for government staff to be located on-site at prime contractor and key subcontractor locations.

The program office plans to increase the number of resident systems engineers from 31 to 54 to provide adequate government oversight of the contractor's system engineering, including verification and validation of engineering designs at key decision points (such as the critical design review milestone).
The program office has better defined the role and responsibilities of the program scientist, the individual who is expected to maintain an independent voice on scientific matters and advise the program manager on related technical issues and risks.

The program office also intends to add three resident specialists in earned value management to monitor contractor cost and schedule performance.

NOAA has work under way to develop a GOES-R contract award fee structure and award fee review board consistent with our recent findings, the Commerce Inspector General's findings, and other best practices, such as designating a non-program executive as the fee-determining official to ensure objectivity in the allocation of award fees.

NOAA and NASA have implemented a more integrated management approach that is designed to draw on NASA's expertise in satellite acquisitions and increase NOAA's involvement in all major components of the acquisition.

The program office reported that it intended to establish a management reserve of 25 percent, consistent with the recommendations of the Defense Science Board Report on Acquisition of National Security Space Programs.

While these steps should provide more robust government oversight and independent analysis capabilities, more work remains to fully address this lesson. Specifically, the program office has not determined the appropriate level of resources it needs to adequately track and oversee the program, and the planned addition of three earned value management specialists may not be enough as acquisition activities increase. By contrast, after its recent problems and in response to the independent review team findings, NPOESS program officials plan to add 10 program staff dedicated to earned value, cost, and schedule analysis.
An insufficient level of established earned value management capability places the GOES-R program office at risk of making poor decisions based on inaccurate and potentially misleading information. Finally, while NOAA officials believe that assuming sole responsibility for the acquisition of GOES-R will improve their ability to manage the program effectively, this change also increases the risk to mission success: NOAA is taking on its first major system acquisition, and its lack of experience adds risk. Until it fully addresses the lesson of ensuring an appropriate level of resources to oversee its contractor, NOAA faces an increased risk that the GOES-R program will repeat the management and contractor performance shortfalls that have plagued past procurements.

We and others have reported on significant deficiencies in NOAA's senior executive oversight of NPOESS. The lack of timely decisions and regular involvement of senior executive management was a critical factor in that program's rapid cost and schedule growth. NOAA formed its program management council in response to the lack of adequate senior executive oversight on NPOESS. In particular, this council is expected to provide regular reviews and assessments of selected NOAA programs and projects—the first of which is the GOES-R program. The council is headed by the NOAA Deputy Undersecretary and includes senior officials from Commerce and NASA. It is expected to hold monthly meetings to discuss GOES-R program status and to approve the program's entry into subsequent acquisition phases at key decision milestones—including contract award and critical design reviews, among others. Since its establishment in January 2006, the council has met regularly and has established a mechanism for tracking action items to closure.
The establishment of the NOAA Program Management Council is a positive action that should support the agency's senior-level governance of the GOES-R program. In moving forward, it is important that this council continue to meet on a regular basis and exercise diligence in questioning the data presented to it and in making difficult decisions. In particular, it will be essential that the results of all preliminary studies and independent assessments of the technical maturity of the system and its components be reviewed by this council, so that an informed decision can be made about the level of technical complexity the agency is taking on when proceeding past these key decision milestones. In light of the recent uncertainty regarding the future scope and cost of the GOES-R program, the council's governance will be critical in making those difficult decisions in a timely manner.

To improve NOAA's ability to effectively manage the GOES-R procurement, in our accompanying report we recommended that the Secretary direct the NOAA Program Management Council to take the following three actions:

Once the scope of the program has been finalized, establish a process for objectively evaluating and reconciling the government and independent life cycle cost estimates.

Perform a comprehensive review of the Advanced Baseline Imager, using systems engineering experts, to determine the level of technical maturity achieved on the instrument, to assess whether the contractor has implemented sound management and process engineering, and to verify that the technology is sufficiently mature before moving the instrument into production.

Seek assistance from an independent review team to determine the appropriate level of resources needed at the program office to adequately track and oversee the contractor's earned value management.
Among other things, the program office should be able to perform a comprehensive integrated baseline review after system development contract award, provide surveillance of contractor earned value management systems, and perform project scheduling analyses and cost estimates. In written comments, Commerce agreed with our recommendations and provided information on its plans to implement them. In particular, Commerce intends to establish a process for evaluating and reconciling the various cost estimates and to analyze this process and the results with an independent review team composed of recognized satellite acquisition experts. The agency is also planning to have this independent review team provide assessments of the Advanced Baseline Imager’s technical maturity and the adequacy of the program management’s staffing plans. In summary, the procurement of the next series of geostationary environmental satellites—called the GOES-R series—is at a critical juncture. Recent concerns about the potential for cost growth on the GOES-R procurement have led the agency to reduce the scope of requirements for the satellite series. According to NOAA officials, current plans call for acquiring two satellites and moving away from a technically complex new instrument in favor of existing technologies. While reducing the technical complexity of the system prior to contract award and defining an affordable program are sound business practices, it will be important for NOAA to balance these actions with the agencies’ long-term need for improved geostationary satellites over time. 
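As context for the earned value management surveillance discussed above, the standard earned value indices can be sketched as follows. The index definitions are the conventional ones from earned value management practice (e.g., the EIA-748 guidelines), not formulas given in this testimony, and the sample contract figures are hypothetical:

```python
def evm_indices(planned_value, earned_value, actual_cost):
    """Standard earned value management indices (conventional definitions).

    CPI (cost performance index) below 1.0 signals a cost overrun;
    SPI (schedule performance index) below 1.0 signals a schedule slip.
    """
    cpi = earned_value / actual_cost   # value of work done per dollar spent
    spi = earned_value / planned_value # work done vs. work planned to date
    return cpi, spi

# Hypothetical contract status, in millions of dollars: $10M of work
# planned to date, $8M of work actually accomplished, $12M actually spent.
cpi, spi = evm_indices(planned_value=10.0, earned_value=8.0, actual_cost=12.0)
```

A program office without sufficient earned value management surveillance capability cannot validate the contractor-reported inputs to these indices, which is the risk the testimony describes.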
While NOAA is positioning itself to improve the acquisition of this system by incorporating the lessons learned from other satellite procurements, including the need to establish realistic cost estimates, ensure sufficient government and contractor management, and obtain effective executive oversight, further steps remain to fully address selected lessons and thereby mitigate program risks. Specifically, NOAA has not yet developed a process to evaluate and reconcile the independent and government cost estimates. In addition, NOAA has not yet determined how it will ensure that a sufficient level of technical maturity will be achieved in time for an upcoming decision milestone or determined the appropriate level of resources it needs to adequately track and oversee the program using earned value management. Moreover, problems that are frequently experienced on major satellite acquisitions, including insufficient technical maturity, overly aggressive schedules, inadequate systems engineering capabilities, and insufficient management reserve, will need to be closely monitored throughout this critical acquisition’s life cycle. To NOAA’s credit, it has begun to develop plans for implementing our recommendations. These plans include, among other things, establishing a process to evaluate and reconcile the various cost estimates and obtaining assessments from an independent review team on the technical maturity of a key instrument in development and the adequacy of the program management’s staffing plans. However, until it addresses these lessons, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the committee may have at this time. 
If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Other key contributors to this testimony include Carol Cha, Neil Doherty, Nancy Glover, Kush Malhotra, Colleen Phillips, and Karen Richey. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Oceanic and Atmospheric Administration (NOAA) plans to procure the next generation of geostationary operational environmental satellites, called the Geostationary Operational Environmental Satellites-R series (GOES-R). This new series is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting through the year 2028. GAO was asked to summarize and update its report previously issued to the Subcommittee on Environment, Technology, and Standards--Geostationary Operational Environmental Satellites: Steps Remain in Incorporating Lessons Learned from Other Satellite Programs, GAO-06-993 (Washington, D.C.: Sept. 6, 2006). This report (1) determines the status of and plans for the GOES-R series procurement, and (2) identifies and evaluates the actions that the program management team is taking to ensure that past problems experienced in procuring other satellite programs are not repeated. At the time of our review, NOAA was nearing the end of the preliminary design phase of its GOES-R system--which was estimated to cost $6.2 billion and scheduled to have the first satellite ready for launch in 2012. It expected to award a contract in August 2007 to develop this system. However, recent analyses of the GOES-R program cost--which in May 2006 the program office estimated could reach $11.4 billion--have led the agency to consider reducing the scope of requirements for the satellite series. Since our report was issued, NOAA officials told GAO that the agency has made a decision to reduce the scope of the program to a minimum of two satellites and to reduce the complexity of the program by canceling a technically complex instrument. NOAA has taken steps to implement lessons learned from past satellite programs, but more remains to be done. 
Prior satellite programs--including a prior GOES series, a polar-orbiting environmental satellite series, and various military satellite programs--often experienced technical challenges, cost overruns, and schedule delays. Key lessons from these programs include the need to (1) establish realistic cost and schedule estimates, (2) ensure sufficient technical readiness of the system's components prior to key decisions, (3) provide sufficient management at government and contractor levels, and (4) perform adequate senior executive oversight to ensure mission success. NOAA has established plans to address these lessons by conducting independent cost estimates, performing preliminary studies of key technologies, placing resident government offices at key contractor locations, and establishing a senior executive oversight committee. However, many steps remain to fully address these lessons. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements.
The Superfund process begins with the discovery of a potentially hazardous site or notification to EPA of the possible release of hazardous substances, pollutants, or contaminants that may threaten human health or the environment. EPA’s regional offices may discover potentially hazardous waste sites, or such sites may come to EPA’s attention through reports from state agencies or citizens. As part of the site assessment process, EPA regional offices use a screening system called the Hazard Ranking System to guide decision making and, as needed, to numerically assess a site’s potential to pose a threat to human health or the environment. Sites with sufficiently high scores are eligible to be proposed for listing on the NPL. EPA regions submit sites to EPA headquarters for possible listing on the NPL based on a variety of factors, including the availability of alternative state or federal programs that may be used to clean up the site. In addition, EPA officials have noted that, as a matter of policy, EPA seeks concurrence from the governor or environmental agency head of the state in which a site is located before listing the site. Sites that EPA proposes to list on the NPL are published in the Federal Register. After a period of public comment, EPA reviews the comments and decides whether to formally list the sites on the NPL. EPA places sites into the following six broad categories based on the type of activity at the site that led to the release of hazardous material: Manufacturing sites include wood preservation and treatment, metal finishing and coating, electronic equipment, and other types of manufacturing facilities. Mining sites include mining operations for metals or other substances. “Multiple” sites include sites with operations that fall into more than one of EPA’s categories. “Other” sites include sites that often have contaminated sediments or groundwater plumes with no identifiable source. 
Recycling sites include recycling operations for batteries, chemicals, and oil recovery. Waste management sites include landfills and other types of waste disposal facilities. After a site is listed on the NPL, EPA or a potentially responsible party (PRP) will generally begin the remedial cleanup process (see fig. 1) by conducting a two-part study of the site: (1) a remedial investigation to characterize site conditions and assess the risks to human health and the environment, among other actions, and (2) a feasibility study to evaluate various options to address the problems identified through the remedial investigation. The culmination of these studies is a record of decision (ROD) that identifies EPA’s selected remedy for addressing the contamination. A ROD typically lays out the planned cleanup activities for each operable unit of the site. EPA then plans the selected remedy during the remedial design phase, which is followed by the remedial action phase, when one or more remedial action projects are carried out. The number of operable units and planned remedial action projects at a site may increase or decrease over time as knowledge of site conditions changes. When all physical construction at a site is complete, all immediate threats have been addressed, and all long-term threats are under control, EPA generally considers the site to be construction complete. After construction completion, most sites enter the post-construction phase, which includes actions such as operation and maintenance, during which the PRP or the state maintains the remedy, such as groundwater restoration or a landfill cover, and EPA ensures that the remedy continues to protect human health and the environment. Eventually, when EPA and the state determine that no further site response is needed, EPA may delete the site from the NPL. 
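As background on the Hazard Ranking System screening used in site assessment: under EPA’s published HRS regulation (the formula is not detailed in this report), four exposure-pathway scores, each ranging from 0 to 100, are combined as a root mean square, and a site scoring at or above 28.50 is eligible for proposal to the NPL. A minimal sketch:

```python
import math

NPL_CUTOFF = 28.50  # listing eligibility threshold from EPA's HRS rule

def hrs_site_score(groundwater, surface_water, soil_exposure, air):
    """Combine the four HRS pathway scores (each 0-100) as a root mean square."""
    pathways = (groundwater, surface_water, soil_exposure, air)
    return math.sqrt(sum(p ** 2 for p in pathways) / len(pathways))

# A single maximally scored pathway is enough to exceed the cutoff:
score = hrs_site_score(100, 0, 0, 0)  # 50.0, above 28.50
```

The root-mean-square combination means one severe pathway can qualify a site for listing even when the other pathways pose little threat.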
According to a 2000 Federal Register notice, during the first 10 years of the Superfund program, the public often measured Superfund’s progress in cleaning up sites by the number of sites deleted from the NPL as compared to the number of sites on the NPL. However, according to the same notice, this measure did not recognize the substantial construction and reduction of risk to human health and the environment that had occurred at some NPL sites. In response, EPA established the sitewide construction completion measure to more clearly communicate to the public progress in cleaning up sites on the NPL. Similarly, according to EPA documents, in 2010, to augment the sitewide construction completion measure and reflect the amount of work being done at Superfund sites, EPA developed and implemented a new performance measure, remedial action project completions. EPA includes these two performance measures in its Annual Performance Plan. The cleanup of nonfederal NPL sites is generally funded by one or a combination of the following methods: Potentially responsible parties are liable for conducting or paying for site cleanup of hazardous substances. In some cases, PRPs cannot be identified or may be unwilling or financially unable to perform the cleanup. CERCLA authorizes EPA to pay for cleanups at sites on the NPL, including these sites. To fund EPA-led cleanups at nonfederal NPL sites, among other Superfund program activities, CERCLA established the Hazardous Substance Superfund Trust Fund (Trust Fund). Historically, the Trust Fund was financed primarily by taxes on crude oil and certain chemicals, as well as an environmental tax on corporations. The authority to levy these taxes expired in 1995. Since fiscal year 2001, appropriations from the general fund have constituted the largest source of revenue for the Trust Fund. About 80 percent of the funds EPA spent to clean up nonfederal NPL sites from 1999 through 2013 came from annual appropriations. 
The remaining roughly 20 percent came from special accounts and state cost share. EPA has limited cost data where a PRP has conducted the cleanup. Under CERCLA, EPA is authorized to enter into settlement agreements with PRPs to pay for cleanups, and EPA may retain and use these funds for cleanups. Funds from these settlements may be deposited into site-specific subaccounts in the Trust Fund, which are referred to as “special accounts” and are generally used for future cleanup actions at the sites associated with a specific settlement, or to reimburse funds that EPA had previously used for response activities at these sites. According to EPA documents, in fiscal year 2013, there were a total of 993 open special accounts with an end-of-year balance of about $1.7 billion. Most of these funds could be used for only a limited number of sites—for example, 3 percent of the open accounts, representing 33 sites, held about 56 percent of the total special account resources available. States are required to pay 10 percent of Trust Fund-financed remedial action cleanup costs and at least 50 percent of cleanup costs for facilities that were operated by the state or any political subdivision of the state at the time of any hazardous substances disposal at the facility. States may pay their share of response costs using cash, services, credit, or any combination thereof. Under CERCLA, states are also required to assure provision of all future maintenance of a Trust Fund-financed remedial action. In fiscal year 2014, EPA updated its information system for the Superfund program from CERCLIS to the Superfund Enterprise Management System (SEMS). According to EPA officials and documents, SEMS consolidated five stand-alone information systems and reporting tools into one system. These systems include CERCLIS, the Superfund Document Management System (SDMS), the Institutional Controls Tracking System (ICTS), the eFacts reporting tool, and ReportLink. 
CERCLIS contained information on, among other things, the contaminated sites’ cleanup status and cleanup milestones reached. The SDMS was a national electronic records collection system mostly with site cleanup records; ICTS was a database with legal data related to controlling access to sites; eFacts was a visual reporting tool that generated charts and graphs; and ReportLink was a traditional reporting tool that allowed regions and headquarters to share reports. According to EPA officials, SEMS should be more user-friendly and provide more mobility, thus allowing EPA regional staff to access the system in the field through various devices. Currently, regions are in the process of entering data for each site into SEMS. The process of converting entirely to SEMS has taken additional time because, according to EPA officials, the complexity of the new software and its difference from CERCLIS have created a more significant obstacle than anticipated. In addition, EPA officials stated that the agency will not be in a position to release data comparable to the data previously shared from CERCLIS until EPA officials are confident that all regions have mastered the software to update site schedules. According to EPA officials, SEMS should be fully operable in fiscal year 2016. According to our analysis of EPA and Census data, as of fiscal year 2013, an estimated 39 million people—about 13 percent of the U.S. population—lived within 3 miles of a nonfederal NPL site. Many of these people—an estimated 14 million—were either under the age of 18 or aged 65 and older, groups that EPA describes as sensitive subpopulations. EPA Region 2 had the largest number of people living within 3 miles of a nonfederal NPL site—an estimated 10 million, or about one-third of the region’s total population. Figure 2 provides information on the number of nonfederal NPL sites in each region and the estimated number of people that lived within 3 miles of those sites as of fiscal year 2013. 
The state of New York had the largest number of people living within 3 miles of nonfederal NPL sites—an estimated 6 million, or about 29 percent of the state’s population. The state of New Jersey had the largest percentage of its estimated population living within 3 miles of a nonfederal NPL site—about 50 percent. Appendix II provides information on the estimated population that lived within 3 miles of a nonfederal NPL site, by state, as of fiscal year 2013. Annual federal appropriations (appropriations) to EPA’s Superfund program generally declined from about $2 billion to about $1.1 billion from fiscal years 1999 through 2013. EPA expenditures—from these federal appropriations—of site-specific cleanup funds (funds spent on remedial cleanup activities at nonfederal NPL sites) declined from about $0.7 billion to about $0.4 billion during the same time period. Because EPA prioritizes funding work that is ongoing, the decline in funding led EPA to delay the start of about one-third of the new remedial action projects that were ready to begin in a given fiscal year at nonfederal NPL sites from fiscal years 1999 through 2013, according to EPA officials. EPA spent the largest amount of cleanup funds in Region 2, which accounted for about 32 percent of cleanup funds spent at nonfederal NPL sites from fiscal years 1999 through 2013. During the same time period, EPA spent the majority of cleanup funds in seven states, with the most in New Jersey—over $2.0 billion, or more than 25 percent of cleanup funds. According to our analysis of EPA data, the median per-site annual expenditures for cleanup at nonfederal NPL sites declined by about 48 percent from fiscal years 1999 through 2013, and EPA spent the majority of cleanup funds on an average of about 18 sites annually. Unless otherwise indicated, all dollar and percentage calculations are in constant 2013 dollars. From fiscal years 1999 through 2013, the annual appropriations to EPA’s Superfund program generally declined. 
Annual appropriations declined from about $2 billion to about $1.1 billion—about 45 percent—from fiscal years 1999 through 2013. Under the American Recovery and Reinvestment Act of 2009 (Recovery Act), EPA’s Superfund program received an additional $639 million in fiscal year 2009. Figure 3 shows the annual federal appropriations from fiscal years 1999 through 2013. EPA allocates annual appropriations to the Superfund program among the remedial program and other Superfund program areas, such as enforcement (see fig. 4). The remedial program generally funds cleanups of contaminated nonfederal NPL sites. EPA headquarters allocates funds for the remedial program to various categories: payroll and other administrative activities; preconstruction and other activities (such as remedial investigations and feasibility studies); and construction (such as remedial action projects) and post-construction activities. EPA allocates funds for preconstruction and other activities to its regional offices using a model based on a combination of historical allocations and a scoring system based on regions’ projects planned for the upcoming year. Each region decides how it will spend funds allocated by headquarters for its preconstruction and other remedial activities. EPA headquarters, in consultation with the regions, allocates site-specific cleanup funds for construction and post-construction activities between ongoing work and new remedial action projects. From fiscal years 1999 through 2013, the decline in appropriations to the Superfund program led EPA to decrease expenditures of site-specific cleanup funds on remedial cleanup activities from about $0.7 billion to about $0.4 billion. We define site-specific cleanup funds as those funds spent on preconstruction, construction, and post-construction activities, which comprise remedial cleanup activities. Expenditures of Recovery Act funds account for the increase in cleanup fund expenditures from fiscal years 2009 through 2011. 
Figure 5 shows EPA’s expenditures of cleanup funds at nonfederal NPL sites for fiscal years 1999 through 2013. EPA policy prioritizes funding ongoing work over starting new remedial action projects. EPA officials explained that funding ongoing work is prioritized for a variety of reasons, such as the risk of recontamination and the additional cost of demobilizing and remobilizing equipment and infrastructure at a site. To establish funding priorities for new remedial action projects, EPA’s National Risk-Based Priority Panel (Panel)—composed of EPA regional and headquarters program experts—ranks new remedial action projects based on their relative risk to human health and the environment. The Panel uses five criteria to evaluate proposed new remedial action projects: (1) risks to the human population exposed (e.g., population size and proximity to contaminants), (2) contaminant stability (e.g., use and effectiveness of institutional controls like warning signs), (3) contaminant characteristics (e.g., concentration and toxicity), (4) threat to a significant environmental concern (e.g., endangered species or their critical habitat), and (5) program management considerations (e.g., high-profile projects). Each criterion is ranked on a weighted scale of one to five, with the highest score for any criterion being five. According to EPA documents, the priority ranking process ensures that funding decisions for new remedial action projects are based on common evaluation criteria that emphasize risk to human health and the environment. The Panel then recommends the new projects to fund to the Assistant Administrator of the Office of Solid Waste and Emergency Response, who makes the final funding decisions. A decline in funding delayed the start of some new remedial action projects, according to EPA officials. 
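The Panel’s weighted ranking described above can be illustrated with a small sketch. The five criteria come from this report, but the relative weights and the scoring function below are hypothetical, since the report does not publish the Panel’s actual weighting:

```python
# Hypothetical relative weights for the Panel's five evaluation criteria;
# the criteria names paraphrase the report, the weights are illustrative.
WEIGHTS = {
    "human_population_risk": 5,
    "contaminant_stability": 4,
    "contaminant_characteristics": 4,
    "environmental_threat": 3,
    "program_management": 2,
}

def panel_score(scores):
    """Weighted sum of criterion scores; each criterion is scored 1-5."""
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {value}")
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def rank_projects(projects):
    """Order proposed projects from highest to lowest weighted risk score."""
    return sorted(projects, key=lambda name: panel_score(projects[name]),
                  reverse=True)
```

Under a scheme like this, projects at the top of the ranking would be recommended for funding first, which is consistent with the report’s description of risk-based prioritization when appropriations cannot cover all ready-to-start projects.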
Over the 15-year period from fiscal years 1999 through 2013, EPA generally did not fund all of the new remedial action projects that were ready to begin in a given fiscal year, according to our analysis of EPA data (see table 1). During this time, EPA did not fund about one-third of the new remedial action projects in the year in which they were ready to start. According to EPA officials in headquarters and Region 2, delays in starting new remedial action projects can potentially lead to elevated costs. For example, site conditions can change, such as contaminants migrating at a groundwater site, which will require recharacterization of the location. Also, the extent of the contamination may change, or adjustments may be necessary to the remedy designs, which could take additional time and money. In addition, there may be unmeasured economic costs to the community from delaying the productive reuse of a site, according to EPA officials. Due to an increase in funding from the Recovery Act, EPA started all new remedial action projects ready to start in fiscal years 2009 and 2010, and most new remedial action projects in fiscal year 2011, according to our analysis of EPA data. However, in fiscal year 2012, EPA did not fund and start any of the 21 new remedial action projects that came through the Panel process and were ready to begin that year. The 21 unfunded projects were estimated to have cost over $117 million in 2012, according to EPA officials. Similarly, in fiscal year 2013, EPA did not fund 22 of the 30 projects that were ready to begin, due to competing priorities for declining funds. According to EPA officials, these unfunded projects were estimated to have cost approximately $101 million that year. EPA officials stated that they expect the trend of being unable to fund all new remedial action projects to continue. 
According to EPA officials, prior to funding new remedial action projects, EPA considers both the funds needed in the current fiscal year to begin the project and ongoing funds that will be required in subsequent fiscal years to complete the project. According to EPA officials, as annual appropriations have declined, EPA has generally relied on funds available from prior year Superfund appropriations to fund new remedial action projects and some other work. According to EPA officials, funds from prior year appropriations generally become available for use through deobligations and special account reclassifications. Typically, deobligations occur when EPA determines that some or all of the funds the agency originally obligated for a contract to conduct an activity are no longer needed (e.g., EPA will deobligate funds that it had previously obligated to construct a landfill cover because the final costs were less than originally anticipated). According to EPA officials, reclassifications occur when EPA uses special account funds to reimburse itself for its past expenditures of annually appropriated funds, which then makes the funds originally used for these activities available for the agency to use. Starting in fiscal year 2003, EPA began distributing deobligated funds in a 75/25 percent split so that headquarters kept 75 percent of the deobligated funds for national remedial program priorities, which have been, in large part, used to begin new remedial action projects, and returned 25 percent to the region that provided the deobligated funds. On average, EPA annually provided about $58 million in deobligated funds for construction and post-construction activities during fiscal years 2003 through 2013, according to our analysis of EPA data. 
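The 75/25 distribution of deobligated funds described above is straightforward arithmetic; a sketch, using the reported average of about $58 million in annual deobligations as a hypothetical input amount:

```python
def split_deobligation(amount, hq_share=0.75):
    """Split deobligated funds under the policy in place since fiscal year
    2003: headquarters keeps hq_share for national remedial program
    priorities (largely new remedial action projects); the region that
    provided the deobligated funds receives the remainder."""
    headquarters = amount * hq_share
    region = amount - headquarters
    return headquarters, region

# ~average annual deobligations reported for fiscal years 2003 through 2013
hq, region = split_deobligation(58_000_000)  # 43.5M to HQ, 14.5M to region
```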
According to EPA officials, deobligations are an unpredictable funding stream, and our analysis of EPA data indicates that the amount of deobligations and reclassifications provided for cleanup fluctuated during the fiscal years 2003 through 2013 time period, from a high in fiscal year 2003 of about $102 million to a low in fiscal year 2009 of about $32 million. EPA spent the most cleanup funds from annual appropriations on nonfederal NPL sites in Region 2 from fiscal years 1999 through 2013, according to our analysis of EPA data. EPA spent almost $2.5 billion in this region—about 32 percent of the total cleanup funds spent on nonfederal NPL sites during that time frame and over three times the cleanup funds spent in any other region. According to EPA officials, Region 2 has a significant number of large, EPA-funded sites that have required considerable expenditures to clean up over a long period of time. The agency does not expect this trend to continue, but anticipates that more cleanup funds will be devoted to the cleanup of large mining and sediment sites in the West. Region 8 received the second most in cleanup funds, with about $0.7 billion over the same time period. Figure 6 shows EPA’s expenditure of cleanup funds at nonfederal NPL sites in each region from fiscal years 1999 through 2013. According to our analysis of EPA data, EPA spent the majority of nonfederal NPL cleanup funds in seven states—New Jersey, California, New York, Massachusetts, Idaho, Pennsylvania, and Florida—during the 15-year period from fiscal years 1999 through 2013. New Jersey sites received the most cleanup funds, with over $2.0 billion (or more than 25 percent of cleanup funds over this period). The agency also spent the largest portion of Recovery Act funds in New Jersey. According to EPA officials, New Jersey has a large number of sites that do not have PRPs to perform the cleanup and needs federal appropriations to clean up these sites. 
In addition, sites in areas of highly dense population, like many in New Jersey, cost more to clean up, according to EPA officials. Agency officials expect the current level of expenditures in New Jersey to decline in the future because the cleanup at some of the sites will be completed. Figure 7 shows EPA’s expenditure of cleanup funds in the seven states from fiscal years 1999 through 2013. According to our analysis of EPA data, the median per-site annual expenditures on remedial cleanup activities at nonfederal NPL sites generally declined from fiscal years 1999 through 2013. The median per-site annual expenditures declined by about 48 percent, from about $36,600 to about $19,100, from fiscal years 1999 through 2013. The decline was more pronounced in recent years, decreasing by about 35 percent from fiscal years 2009 through 2013, compared to about a 12 percent decline from fiscal years 1999 through 2003. Figure 8 shows the median per-site annual expenditures of cleanup funds from annual appropriations at nonfederal NPL sites from fiscal years 1999 through 2013. According to EPA officials, these declines mirror, with some lag time, declines in appropriations, the most significant of which occurred starting in fiscal year 2000 and then again starting in fiscal year 2011. In addition, the agency expects to see further declines in annual cleanup fund expenditures following the same pattern in the near future, according to EPA officials. Specifically, given recent declines in appropriations, EPA expects to see declines in expenditures after a short lag time, while out-year trends will depend on future appropriations. EPA spent the majority of cleanup funds on a few sites—on average, about 18 sites—each year from fiscal years 1999 through 2013, according to our analysis of EPA data. 
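The percentage declines cited in this section follow the standard constant-dollar calculation; a sketch checking two of the figures reported above:

```python
def pct_decline(start, end):
    """Percentage decline from a starting to an ending value
    (both expressed in constant 2013 dollars, as in this report)."""
    return (start - end) / start * 100

# Median per-site annual expenditures, FY1999 to FY2013:
median_drop = pct_decline(36_600, 19_100)   # about 48 percent
# Annual Superfund appropriations, FY1999 to FY2013 (in billions):
approps_drop = pct_decline(2.0, 1.1)        # about 45 percent
```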
The specific sites where EPA spent the majority of cleanup funds varied from year to year, but 6 sites were part of the 18 in more than half the years of the 15-year period—Vineland Chemical Company, Inc. (New Jersey), Bunker Hill Mining and Metallurgical Complex (Idaho), Welsbach and General Gas Mantle-Camden Radiation (New Jersey), Tar Creek-Ottawa County (Oklahoma), New Bedford (Massachusetts), and Federal Creosote (New Jersey). EPA spent at least $175 million from annual appropriations at each of these 6 sites over the 15 years. EPA’s costs to clean up sites differed depending on the type of site. According to our analysis of EPA data on expenditures of cleanup funds from annual appropriations, mining sites were the most expensive to clean up. From fiscal years 1999 through 2013, EPA spent, on average, about 7 to about 52 times as much annually per site at mining sites as at the other types of sites. For example, the average median per-site annual expenditure of cleanup funds was about $750,000 for mining sites, compared to about $104,000 for “other” sites and about $14,000 for waste management sites. According to EPA officials, mining sites are costly to clean up because, among other characteristics, they typically cover a large area and have many sources of contamination. One example of a mining site is the Bunker Hill Mining and Metallurgical Complex in Idaho, where EPA spent almost $330 million to clean up part of the site from fiscal years 1999 through 2013. Figure 9 shows the average median per-site annual expenditure of cleanup funds from annual appropriations at nonfederal NPL sites by type of site from fiscal years 1999 through 2013. According to our analysis of EPA data, the total number of nonfederal sites on the NPL annually remained relatively constant, while remedial action project completions and construction completions generally declined during fiscal years 1999 through 2013. 
The total number of nonfederal sites on the NPL increased from 1,054 in fiscal year 1999 to 1,158 in fiscal year 2013 and averaged about 1,100 annually. According to our analysis of EPA data, the number of remedial action project completions at nonfederal NPL sites generally declined by about 37 percent during the 15-year period. Similarly, from fiscal years 1999 through 2013, the number of construction completions at nonfederal NPL sites generally declined by about 84 percent. From fiscal years 1999 through 2013, the number of new nonfederal sites added to the NPL and the number of nonfederal sites deleted each year from the NPL generally declined, while the total number of nonfederal sites on the NPL remained relatively constant, according to our analysis of EPA data. More specifically, during the fiscal years of our review, there was a period of decline in the number of sites added to the NPL followed by a few years where there was a slight increase. For example, the number of new nonfederal sites added to the NPL each year declined steadily from 37 sites in fiscal year 1999 to 12 in fiscal year 2007. According to EPA officials, there are several reasons for the decline in the number of new nonfederal sites added to the NPL. For example, some states may have been managing the cleanup of sites with their own state programs, especially if a PRP was identified to pay for the cleanup. Additional reasons for the decrease during this time period include: (1) funding constraints that led EPA to focus primarily on sites with actual human health threats and no other cleanup options, (2) use of the NPL as a mechanism of last resort, and (3) referral of sites assessed under Superfund to state cleanup programs. In contrast, from fiscal years 2008 through 2012, there was a general increase in the number of new nonfederal sites added to the NPL annually, according to our analysis of EPA data. 
In fiscal year 2008, EPA added 18 sites and by 2012, the number of sites added annually had increased to 24. According to EPA officials, the numbers may have increased from fiscal years 2008 through 2012, because the agency expanded its focus to consider NPL listing for sites with potential human health and environmental threats, and it shifted its policy to use the NPL when it was deemed the best approach for achieving site cleanup rather than using the NPL as a mechanism of last resort. Also, states’ funding for cleanup programs declined, and states agreed to add sites to the NPL where they encountered difficulty in getting a PRP to cooperate or where the PRP went bankrupt, according to EPA officials. Furthermore, these same officials stated that the increase in the number of new sites added to the NPL could be due to referrals from the Resource Conservation and Recovery Act program because of business bankruptcies, especially in the most recent years. In fiscal year 2013, however, the number of new nonfederal sites added to the NPL declined to 8, the lowest number since fiscal year 1999. In total, EPA added 304 nonfederal sites to the NPL—an average of about 20 sites annually—from fiscal years 1999 through 2013. Figure 10 summarizes the number of new nonfederal sites added to the NPL each year from fiscal years 1999 through 2013. In terms of the types of sites added to the NPL from fiscal years 1999 through 2013, the largest number of sites added to the list were manufacturing sites (120 sites or about 40 percent) followed by “other” sites (90 sites or about 30 percent). In addition, EPA added 35 mining sites (about 12 percent), 32 waste management sites (about 11 percent), 21 recycling sites (about 7 percent), and 6 “multiple” sites (about 2 percent)—sites that fell into more than one of these categories— according to our analysis of EPA data. 
During this time frame, the amount of time between when a site was proposed to be added to the NPL and when it was added to the NPL ranged from 2 months to over 18 years, with a median amount of time of about 6 months. According to EPA officials, there are a variety of reasons to explain why some sites take longer to add to the NPL. For example, EPA could propose a site to be added to the NPL and, in response to the Federal Register notice announcing the proposal, EPA could receive numerous, complex comments that required considerable time and EPA resources to address. In addition, a proposal to add a site to the NPL could act as an incentive for PRPs to resume negotiations with EPA or the state to clean up the site. Moreover, large PRPs with greater financial assets may request additional time to pursue other cleanup options; hire law firms and technical contractors to submit challenging comments to EPA on the proposal to add the site to the NPL; and support outreach efforts that generate state and local opposition to the proposal. EPA officials also noted that certain sites, such as recycling and dry cleaning, are generally added quickly to the NPL because other alternatives may not be available. From fiscal years 1999 through 2013, the number of nonfederal sites deleted from the NPL generally declined, according to our analysis of EPA data. EPA deleted 22 nonfederal sites in fiscal year 1999 and, in fiscal year 2013, EPA deleted only 6 nonfederal sites. In total, EPA deleted 185 nonfederal sites from the NPL during these years. According to EPA officials, the decline in the number of nonfederal sites deleted from the NPL is due to the decline in annual appropriations and the fact that the sites remaining on the NPL are more complex, and they take more time and money to clean up. 
The median number of years from the time a nonfederal site was added to the NPL to the time EPA deleted it from the NPL ranged from about 13 years for those sites deleted in fiscal year 1999, to about 25 years for those sites deleted in fiscal year 2013, with an average median of about 19 years. Region 2 had the largest number of nonfederal sites—41—deleted from the NPL, followed by Regions 6, 3, 4, and 5, which deleted 29, 25, 23, and 23 nonfederal sites, respectively. Figure 11 shows the number of nonfederal sites EPA deleted from the NPL each year from fiscal years 1999 through 2013. From fiscal years 1999 through 2013, according to our analysis of EPA data, the total number of nonfederal sites on the NPL remained relatively constant, and averaged about 1,100 sites annually. From fiscal years 1999 through 2013, the total number of nonfederal sites on the NPL increased less than 10 percent—from 1,054 sites to 1,158 sites as of the end of these fiscal years. In addition, the type of nonfederal sites on the NPL changed during this same time period. For example, in fiscal year 1999, there were 10 mining sites on the NPL or about 1 percent of all nonfederal NPL sites. By fiscal year 2013, there were 44 mining sites on the NPL, which was about 4 percent of all nonfederal NPL sites. Appendix III provides more detailed information from fiscal years 1999 through 2013 on the number of nonfederal sites on the NPL at the end of each fiscal year, following any additions and deletions; as well as the number of nonfederal sites on the NPL each fiscal year by type. According to our analysis of EPA data, from fiscal years 1999 through 2013, the number of remedial action project completions at nonfederal NPL sites declined by about 37 percent, and the length of time to complete the projects increased slightly. 
The number of remedial action project completions in each year gradually declined by about 59 percent from 116 projects (fiscal year 1999) to 47 projects (fiscal year 2010). For fiscal years 2011 through 2012, the number of remedial action project completions increased to 75 and 87, respectively. According to EPA officials, these increases were due to the increase of funds from the Recovery Act. In fiscal year 2013, the number of remedial action project completions dropped to 73. In total, 1,181 remedial action projects were completed from fiscal years 1999 through 2013. In general, according to EPA officials, the decline in remedial action project completions is due to the decline in appropriations and the complexity of current projects, which take longer to complete. These officials also stated that the decline in staffing, especially in the last few years, and particularly in the regions, had a negative impact on the Superfund remedial program and made it difficult to complete work. Figure 12 provides information on the number of remedial action project completions at nonfederal NPL sites from fiscal years 1999 through 2013. According to our analysis of EPA data, Region 2 had the highest number of remedial action project completions (242 projects or about 20 percent of the total project completions), followed by Regions 3, 5, and 4 at 171 projects (or about 14 percent), 140 projects (or about 12 percent), and 128 projects (or about 11 percent), respectively. New Jersey, Pennsylvania, and New York completed the most remedial action projects—over 100 projects in each state—during the 15-year time frame. In addition to fewer remedial action project completions, our analysis of EPA data also shows that the length of time to complete these projects increased slightly from one year to the next. From fiscal years 1999 through 2013, the average median length of time to complete these projects was about 3 years. 
In fiscal year 1999, the median amount of time to complete projects was about 2.6 years. Over time, the median amount of time gradually increased to almost 4 years in fiscal year 2013. Regions 6 and 3 had the lowest average median times of about 2 years to complete projects. In contrast, Region 10 had the highest average median time of over 5 years to complete projects. According to EPA officials, remedial action projects are taking longer to complete because they are becoming more complex. In addition, these officials stated that, as noted above, shortages in EPA regional staffing levels and a decline in state environmental agency personnel are causing delays throughout the Superfund program, from site assessments to completion of remedial action projects. Similar to the decline in the number of remedial action project completions, from fiscal years 1999 through 2013, the number of construction completions at nonfederal NPL sites generally declined by about 84 percent, according to our analysis of EPA data. Specifically, fiscal years 1999 and 2000 had the largest number of construction completions at nonfederal NPL sites—80 sites each fiscal year. In contrast, in fiscal year 2013, the number of construction completions at nonfederal NPL sites declined to 13. During the 15-year time frame, 516 nonfederal NPL sites reached construction completion. According to EPA officials, the number of construction completions at nonfederal NPL sites declined because, as noted above, the sites are becoming more complex and difficult to clean up, funds available to perform the cleanup are declining, the number of sites available for construction completion declined from fiscal years 1999 through 2013, and regional staffing is declining. In addition, adverse weather conditions, such as excessive rain, and the discovery of new contaminants can delay progress at some sites, according to these same officials. 
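The percentage declines cited in this section reduce to simple arithmetic on the endpoint counts reported above; a minimal check:

```python
def percent_decline(start, end):
    """Percentage decline from a starting annual count to an ending one."""
    return round((start - end) / start * 100)

# Remedial action project completions: 116 (FY1999) down to 73 (FY2013).
print(percent_decline(116, 73))  # about 37 percent
# Low point of the series: 116 (FY1999) down to 47 (FY2010).
print(percent_decline(116, 47))  # about 59 percent
# Construction completions: 80 (FY1999) down to 13 (FY2013).
print(percent_decline(80, 13))   # about 84 percent
```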
Figure 13 shows the trend in the number of construction completions at nonfederal NPL sites from fiscal years 1999 through 2013. In fiscal year 1999, the median number of years to reach construction completion was about 12 years, and in fiscal year 2013, it was about 16 years. During the 15-year period, Region 2 had the largest number of construction completions at nonfederal NPL sites, 104, followed by Region 5 with 95 sites. According to EPA officials, one of the reasons for the decrease in the number of construction completions was the decline, from fiscal years 1999 through 2013, in the total number of nonfederal sites that were available for construction completion. Our analysis of EPA data indicates that, while the number of sites available for construction completion declined, the share of those available sites reaching construction completion each year declined as well, as shown in figure 14. For example, in fiscal year 1999, there were 80 construction completions at nonfederal NPL sites out of 630 sites available for construction completion (or about 13 percent). However, in fiscal year 2013, there were 13 construction completions out of 428 available sites (or about 3 percent). We requested comments on a draft of this product from EPA. EPA did not provide written comments. In an e-mail received on September 11, 2015, the Audit Liaison stated that EPA agreed with our report’s findings and provided technical comments. We incorporated these technical comments, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. This appendix provides information on the objectives, scope of work, and the methodology used to determine, for fiscal years 1999 through 2013, the trends in (1) the annual federal appropriations to the Superfund program and Environmental Protection Agency (EPA) expenditures for remedial cleanup activities at nonfederal sites on the National Priorities List (NPL) and (2) the number of nonfederal sites on the NPL, the number of remedial action project completions, and the number of construction completions at nonfederal NPL sites. To determine the trend in the annual federal appropriations to the Superfund program and EPA expenditures for remedial cleanup activities at nonfederal sites on the NPL from fiscal years 1999 through 2013, we reviewed and analyzed Superfund program funding data. In addition, we analyzed expenditure data from EPA’s Integrated Financial Management System for fiscal years 1999 through 2003, and from its replacement financial system Compass, for fiscal years 2004 through 2013. These data included Superfund agency expenditures from annual appropriations, including American Recovery and Reinvestment Act of 2009 funds, but they excluded expenditures of Homeland Security Supplemental appropriation, special accounts, and state cost share funds, as well as funds received from other agencies (i.e., funds-in interagency agreements and intergovernmental personnel agreements) and expenditures in support of Brownfields program activities. EPA provided agencywide data for site and nonsite expenditures segregated by expenditure category and source of funding. 
EPA provided the financial data in nominal values, which we converted to constant 2013 dollars. We analyzed these data to identify the trend in total expenditures of annual federal appropriations for, among other things, the remedial action cleanup process and the median expenditure by site and type of site (e.g., mining and manufacturing). The scope of our analyses for both objectives varied from year to year because we examined only nonfederal sites that were “active,” i.e., on the NPL at any given point during the fiscal year. We also obtained and analyzed information on the nonfederal NPL sites that, according to EPA, had remedial action projects that were ready to begin but were not funded because of resource constraints. To determine the trend in the number of nonfederal sites on the NPL, the number of remedial action project completions, and the number of construction completions at nonfederal NPL sites from fiscal years 1999 through 2013, we analyzed EPA’s program data from fiscal years 1999 through 2013. At the time of our analysis, EPA officials stated that 2013 would be the most recent year with complete and stable data, and these data were available in the agency’s Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) database. As of June 2015, EPA officials stated that the agency was not in a position to release data for fiscal year 2014 that would be comparable to the fiscal years 1999 through 2013 data until fiscal year 2016. However, in July 2015, EPA officials were able to provide fiscal year 2014 data on the number of new nonfederal sites added to the NPL, nonfederal sites deleted from the NPL, remedial action project completions, and construction completions because the agency gathers these data through manual data requests for which each EPA regional office certifies the data that it provides to EPA Headquarters. 
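The nominal-to-constant-dollar conversion described at the start of this paragraph amounts to dividing each nominal amount by a price index relative to the base year. The sketch below is illustrative only: the deflator values and the dollar amount are hypothetical placeholders, not the actual index series or appropriations figures GAO used.

```python
# Hypothetical price index relative to fiscal year 2013 (2013 = 1.00);
# these deflator values are illustrative, not GAO's actual series.
deflator = {1999: 0.74, 2006: 0.88, 2013: 1.00}

def to_constant_2013(nominal_dollars, fiscal_year):
    """Convert a nominal-dollar amount to constant 2013 dollars."""
    return nominal_dollars / deflator[fiscal_year]

# Under this illustrative index, $1.48 billion nominal in FY1999
# corresponds to $2.0 billion in constant 2013 dollars.
print(to_constant_2013(1.48e9, 1999))
```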
We obtained data from EPA for all of the nonfederal sites that were or had been on the NPL, as of the end of fiscal year 2013. One site, the Ringwood Mines/Landfill site, had two final dates—the date a site is formally added to the NPL via a Federal Register notice—because the site was restored to the NPL after it had been deleted. We used the latest final date that was provided by EPA in our analysis. The Ringwood Mines/Landfill site was included in the results of our analysis of new nonfederal sites added to the NPL and the number of nonfederal sites on the NPL, but we excluded it from our analysis of the median amount of time between when a site is proposed and when it is added to the NPL. Our analysis included nonfederal sites that were on the NPL, including sites that had been deleted, through fiscal year 2013. We analyzed site-level data for nonfederal NPL sites to summarize trends in the number of new nonfederal sites added to the NPL and the number of nonfederal sites that reached construction completion and deletion. We also analyzed the number of remedial action project completions in each of the 15 years in our analysis. Our analysis did not include (1) four sites that started off on the NPL but were deferred to another authority and deleted from the NPL and (2) five sites that were proposed but never became final on the NPL. To address both objectives, we reviewed agency documents including, for example, the Superfund Program Implementation Manual, and we interviewed EPA officials in headquarters and Region 2 to discuss the trends we identified in our analyses and potential reasons for these trends. 
We spoke with EPA staff in Region 2 because its sites received the most site-specific cleanup funds for remedial cleanup activities; it includes New York, the state with the largest population living within a 3-mile buffer of its nonfederal NPL sites as of fiscal year 2013; and it includes New Jersey, the state with the largest number of nonfederal NPL sites in fiscal year 2013. We also interviewed knowledgeable stakeholders from the Association of State and Territorial Solid Waste Management Officials and the National Academy of Sciences. Additionally, we reviewed prior GAO reports on EPA’s Superfund program. A list of related GAO products is included at the end of this report. To assess the reliability of the data from the EPA databases used in this report, we reviewed relevant documents, such as the 2013 CERCLIS data entry control plan guidance and regions’ CERCLIS data entry control plans; examined the data to identify obvious errors or inconsistencies; compared the data that we received to publicly available data; and interviewed EPA officials. We determined the data to be sufficiently reliable for the purposes of this report. In addition, to determine the estimated population that lived within 3 miles of nonfederal sites on the NPL, we generally relied on EPA’s Office of Solid Waste and Emergency Response methodology and analyzed data from (1) CERCLIS on the 1,158 nonfederal sites on the NPL in the 50 states and U.S. territories (Guam, Puerto Rico, and the Virgin Islands), as of the end of fiscal year 2013, and (2) Census data from the 2009 through 2013 American Community Survey 5-year estimate for the 1,141 nonfederal sites in the 50 states and the District of Columbia. A circular site boundary, with an area equal to the site acreage, was modeled around the latitude/longitude point for each site, and then a 3-mile buffer ring was placed around the site boundary. 
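The buffer construction described above can be approximated numerically: the radius of a circle whose area equals the site acreage, plus 3 miles. The sketch below is a deliberately simplified flat-plane approximation, not EPA’s actual GIS procedure.

```python
import math

SQ_METERS_PER_ACRE = 4046.8564224
METERS_PER_MILE = 1609.344

def buffer_radius_miles(site_acres):
    """Outer radius of the 3-mile buffer ring: the radius of a circle
    whose area equals the site acreage, plus 3 miles. A site without
    acreage is treated as a point (site radius 0), as in the methodology
    described above for the 138 sites lacking acreage information."""
    site_radius_m = math.sqrt(site_acres * SQ_METERS_PER_ACRE / math.pi)
    return site_radius_m / METERS_PER_MILE + 3.0

print(round(buffer_radius_miles(0), 2))    # a point site: 3.0 miles
print(round(buffer_radius_miles(640), 2))  # a one-square-mile site
```

For a 640-acre (one-square-mile) site, the modeled site radius is about 0.56 miles, so the buffer extends about 3.56 miles from the site’s center point.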
For the 138 sites in 34 states for which EPA did not have acreage information, a circular site boundary was modeled around the latitude/longitude point, and then a 3-mile buffer ring was placed around the point. American Community Survey data were then collected for each block group with a centroid that fell within the 3-mile area and rounded. Percentage numbers were rounded to the nearest whole percent. We conducted this performance audit from October 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. Appendix III provides information from fiscal years 1999 through 2013 on the number of nonfederal sites on the National Priorities List (NPL) at the beginning and end of the fiscal year after accounting for new sites added to and existing sites deleted from the NPL during the fiscal year (table 2); and the number of nonfederal sites on the NPL by site type for each fiscal year (table 3). In addition to the individual named above, Vincent Price and Diane Raynes (Assistant Directors), Antoinette Capaccio, Katherine Carter, John Delicath, Michele Fejfar, Diana C. Goody, Catherine Hurley, John Mingus, David Moreno, and Dan Royer made key contributions to this report. Hazardous Waste Cleanup: Observations on States’ Role, Liabilities at DOD and Hardrock Mining Sites, and Litigation Issues. GAO-13-633T. Washington, D.C.: May 22, 2013. Superfund: EPA Should Take Steps to Improve Its Management of Alternatives to Placing Sites on the National Priorities List. GAO-13-252. 
Washington, D.C.: April 9, 2013. Superfund: Status of EPA’s Efforts to Improve Its Management and Oversight of Special Accounts. GAO-12-109. Washington, D.C.: January 18, 2012. Superfund: Information on the Nature and Costs of Cleanup Activities at Three Landfills in the Gulf Coast Region. GAO-11-287R. Washington, D.C.: February 18, 2011. Superfund: EPA’s Costs to Remediate Existing and Future Sites Will Likely Exceed Current Funding Levels. GAO-10-857T. Washington, D.C.: June 22, 2010. Superfund: EPA’s Estimated Costs to Remediate Existing Sites Exceed Current Funding Levels, and More Sites Are Expected to Be Added to the National Priorities List. GAO-10-380. Washington, D.C.: May 6, 2010. Superfund: Litigation Has Decreased and EPA Needs Better Information on Site Cleanup and Cost Issues to Estimate Future Program Funding Requirements. GAO-09-656. Washington, D.C.: July 15, 2009.
Under the Superfund program, EPA places some of the most seriously contaminated sites on the NPL. At the end of fiscal year 2013, nonfederal sites made up about 90 percent of these sites. At these sites, EPA undertakes remedial action projects to permanently and significantly reduce contamination. Remedial action projects can take a considerable amount of time and money, depending on the nature of the contamination and other site-specific factors. In GAO's 2010 report on cleanup at nonfederal NPL sites, GAO found that EPA's Superfund program appropriations were generally declining, and limited funding had delayed remedial cleanup activities at some of these sites. GAO was asked to review the status of the cleanup of nonfederal NPL sites. This report examines, for fiscal years 1999 through 2013, the trends in (1) the annual federal appropriations to the Superfund program and EPA expenditures for remedial cleanup activities at nonfederal sites on the NPL; and (2) the number of nonfederal sites on the NPL, the number of remedial action project completions, and the number of construction completions at nonfederal NPL sites. GAO analyzed Superfund program and expenditure data from fiscal years 1999 through 2013 (most recent year with complete data available), reviewed EPA documents, and interviewed EPA officials. Annual federal appropriations to the Environmental Protection Agency's (EPA) Superfund program generally declined from about $2 billion to about $1.1 billion in constant 2013 dollars from fiscal years 1999 through 2013. EPA expenditures—from these federal appropriations—of site-specific cleanup funds on remedial cleanup activities at nonfederal National Priorities List (NPL) sites declined from about $0.7 billion to about $0.4 billion during the same time period. Remedial cleanup activities include remedial investigations, feasibility studies, and remedial action projects (actions taken to clean up a site). 
EPA spent the largest amount of cleanup funds in Region 2, which accounted for about 32 percent of cleanup funds spent at nonfederal NPL sites during this 15-year period. The majority of cleanup funds was spent in seven states, with the most funds spent in New Jersey—over $2.0 billion in constant 2013 dollars, or more than 25 percent of cleanup funds. From fiscal years 1999 through 2013, the total number of nonfederal sites on the NPL annually remained relatively constant, while the number of remedial action project completions and construction completions generally declined. Remedial action project completions generally occur when the physical work is finished and the cleanup objectives of the remedial action project are achieved. Construction completion occurs when all physical construction at a site is complete, all immediate threats have been addressed, and all long-term threats are under control. Multiple remedial action projects may need to be completed before a site reaches construction completion. The total number of nonfederal sites on the NPL increased from 1,054 in fiscal year 1999 to 1,158 in fiscal year 2013, and averaged about 1,100 annually. The number of remedial action project completions at nonfederal NPL sites generally declined by about 37 percent during the 15-year period. Similarly, the number of construction completions at nonfederal NPL sites generally declined by about 84 percent during the same period. The figure below shows the number of completions during this period. GAO is not making any recommendations in this report. EPA agreed with GAO's findings.
If a vehicle leaves the roadway, ideally, the roadside would be clear of all obstructions and be traversable. However, because there are numerous roadside areas that cannot be practically cleared of all fixed objects or that have sharp declines, roadside safety hardware can be used to reduce the consequences of a departure from the roadway. The goal of roadside safety hardware is met when the hardware contains, redirects, or decelerates the vehicle to a safe stop without causing serious injury to the vehicle’s occupants or other people. General categories for roadside safety hardware are:

1) longitudinal barriers, which include items such as guardrails and cable barriers and are intended to reduce the probability of a vehicle’s striking an object or terrain feature off the roadway that is less forgiving than the barrier;

2) bridge barriers, which function as longitudinal barriers but are specific to bridge design;

3) barrier terminals/crash cushions, which include items like guardrail end terminals that are intended to absorb or divert the energy of a crash into the end of a longitudinal barrier;

4) support structures, such as sign supports, which are designed to break or yield when struck by a vehicle; and

5) work zone devices, which include a variety of items used in a work zone that are temporary in nature.

See figure 1 below for a depiction of these types of hardware. DOT’s primary mission is to ensure the safety of the traveling public. The strategic goals of the FHWA, within DOT, are to provide safe, reliable, effective, and sustainable mobility for users of the nation’s highway system. FHWA distributes about $40 billion to the states each year through the federal-aid highway program (generally providing 80 to 90 percent of projects’ costs on designated federal-aid highways) for highway and bridge infrastructure, a portion of which is spent on safety improvements including roadside safety hardware. 
FHWA issues regulations and guidelines, and can perform direct oversight for projects that use federal funds, including those on the National Highway System (NHS). The NHS consists of approximately 220,000 miles of the nearly 1 million miles of roadways eligible for federal aid. The NHS includes the 47,000-mile Interstate Highway System as well as other roadways, connectors important to U.S. strategic defense policy, and connectors to major intermodal facilities, such as airports or transit hubs. FHWA administers and oversees the federal-aid highway program through FHWA’s division offices located in all 50 states, the District of Columbia, and Puerto Rico. As part of FHWA’s risk-based oversight, division offices and state DOTs have “Stewardship and Oversight Agreements” that specify the terms under which states assume oversight responsibility for federally funded projects. Under FHWA’s risk-based stewardship and oversight program, FHWA is responsible for determining projects that have an elevated risk or projects where FHWA involvement can enhance meeting program or project objectives. This involvement may include conducting oversight of the entire project or a specific phase or element of the project. For all projects that FHWA does not categorize as having an elevated risk, responsibility for oversight of design and construction of projects is generally assumed by the states. For each federally funded project, FHWA enters into project agreements with the state in which the state agrees to adhere to all applicable federal laws and regulations. Section 109 of Title 23 of the United States Code directs DOT to work in partnership with the state DOTs to develop standards for the NHS and other roadway systems. To fulfill this responsibility, FHWA works in partnership with AASHTO to advance many of its mission areas. AASHTO is an association representing highway and transportation departments in the 50 states, the District of Columbia, and Puerto Rico. 
AASHTO serves as a liaison between state departments of transportation and the federal government and develops and maintains design standards for roadways, bridges, and highway materials. FHWA incorporates some AASHTO standards into federal regulation, for example, the Policy on Geometric Design of Highways and Streets (Green Book), which lists design criteria across a range of roadway types, from rural roads to freeways. FHWA has an ex-officio, non-voting role on AASHTO committees. In cooperation, AASHTO and FHWA sponsor research on common transportation issues through the Transportation Research Board’s National Cooperative Highway Research Program (NCHRP), including research on roadside safety hardware. NCHRP studies are funded by the states from federal-aid highway program funds apportioned to them. Roadside safety hardware is developed by manufacturers, states, and universities and can be crash tested to assess its safety performance. The nine U.S. crash-testing labs that are accredited and recognized by FHWA can conduct full-scale crash testing where roadside safety hardware is hit by a vehicle to determine whether it meets AASHTO-accepted standards for roadside safety hardware. Of the nine labs, three are independently operated; two are owned by companies that also develop roadside safety hardware; three are affiliated with universities; and the final lab is operated by a state department of transportation. Representatives from roadside safety hardware developers, crash test labs, academia, and state and federal transportation departments participate in Task Force 13, a committee whose mission is to develop specifications for new materials and technologies identified for use in highway construction projects. As part of this mission, Task Force 13 develops, recommends, and promotes standards and specifications for roadside safety hardware. The two ways of assessing the performance of roadside safety hardware are lab crash testing and in-service performance evaluations. 
Crash tests can quantify performance for specific conditions that represent the “worst practical conditions” in terms of the speed and angle of the vehicle hitting the hardware. The performance of hardware is evaluated in terms of risk to the vehicle occupants and structural adequacy of the hardware, among other items. AASHTO currently has two sets of crash-testing standards that it endorses for installing roadside safety hardware: NCHRP Report 350 standards adopted in 1993, which are being phased out, and the Manual for Assessing Safety Hardware (MASH) adopted in 2009. AASHTO developed the MASH standards as an update to NCHRP Report 350, and these standards contain revised criteria for crash tests of roadside safety hardware. Updates in MASH include: increases in the size and weight of several test vehicles to better match the current vehicle fleet, changes to the number of tests and impact conditions, and more objective evaluation criteria. AASHTO has also sponsored research on how to assess the performance of roadside safety hardware once it has been installed, an assessment that is referred to as in-service performance evaluation (ISPE). In 2003, NCHRP Report 490: In-Service Performance of Traffic Barriers presented research findings and suggested practical procedures for conducting ISPEs. In-service performance evaluations are a way of assessing roadside safety hardware’s performance in “real-world” scenarios not captured in a crash-test setting. For example, performance may be affected by installation factors, such as slope and grade of roadway and soil type, and maintenance conditions, including whether the hardware has degraded over time from weather or accidents, none of which are captured in crash-testing. 
FHWA oversees and promotes the installation of crash-tested roadside safety hardware through guidance and policy directives to the states and by issuing letters to roadside safety hardware developers that submit crash-test results for review by FHWA. We found that states generally require crash testing; however, some inconsistencies across state policies and practices exist, and the transition to the improved MASH standards has been slow. In 2016, FHWA and AASHTO released a new Joint Implementation Plan stating that states should transition, in phases by 2019, to installing only MASH-standard-tested roadside safety hardware. However, concerns have been raised about meeting these dates, and FHWA has not developed a plan to track the progress of the states and industry in meeting them. FHWA has contracted for a full examination of its roadside safety hardware oversight processes and expects a report with recommendations for potential changes to these processes in the summer of 2016. In line with its overall safety mission as well as that of DOT, FHWA encourages states to install appropriately crash-tested roadside safety hardware. By law, FHWA is required to ensure that highway projects designed and constructed with federal funds are safe. FHWA’s Office of Safety’s specific mission includes advancing the use of scientific methods and data-driven decisions. Also, according to FHWA’s Office of Safety website, roadway departure is one of its focus areas. FHWA has issued policy that roadside safety hardware should demonstrate acceptable crashworthy performance in order to be used on the NHS and receive federal-aid reimbursement. To encourage this outcome, FHWA issues guidance and policy directives to the states and industry. For example, in 2015 FHWA issued a memo that encouraged state agencies to upgrade their existing installations of guardrail end terminals that had been tested to standards issued prior to the NCHRP 350 standards, which were adopted in 1993.
Congress directs DOT to work in partnership with the state DOTs to develop standards for the NHS and other systems. FHWA works in cooperation with AASHTO to promote state adherence to crash-testing standards through joint implementation plans. These plans are voted on and must be approved by a majority of AASHTO’s member states. FHWA and AASHTO issued joint implementation plans in 2009 and 2016 that provided guidance for states to follow in transitioning to updated crash test standards. In addition to providing guidance to states, FHWA also issues federal-aid reimbursement eligibility letters to roadside safety hardware developers that submit their product information, crash test results, and other supporting documentation for review. Although it is called a federal-aid reimbursement eligibility letter, FHWA’s eligibility letter is not required, and federal-aid reimbursement is not contingent upon receipt of an eligibility letter. FHWA issues these letters as a service to the states to provide states with information on the crashworthiness of roadside safety hardware. FHWA posts the letters on its website, creating a central repository of information for states to know which roadside safety hardware has been tested. FHWA officials stated that when they receive a request from a developer for an eligibility letter, the request includes information on the design of the roadside safety hardware device, the crash testing report, pictures and videos of the crash testing, and other information. FHWA officials told us that they follow up with the developer or test lab if they have questions about any of the data or video evidence. FHWA also advises developers that if modifications are made to a roadside safety hardware device that has received an eligibility letter, the developer must resubmit information to FHWA for review.
Though it is FHWA’s policy that all roadside safety hardware installed on the NHS should be crash tested, crash testing is not a requirement for states to receive federal-aid highway program funds because this policy was never incorporated into regulation or other formal agreements with the states (such as FHWA’s project agreements with states). According to FHWA officials, in the absence of a federal statutory or regulatory requirement for crash testing, FHWA cannot withhold federal funding or approval for federal-aid highway program projects from a state should the state choose to install roadside safety hardware that had not been tested to meet appropriate crash test standards. During our review, we found a widespread misperception among state DOT and FHWA division office officials we spoke with that crash testing of roadside safety hardware to applicable standards and obtaining an FHWA eligibility letter were required in order to receive federal reimbursement. In 1991, Congress instructed DOT to issue a final rule regarding revised standards for acceptable roadside safety hardware. In 1993, FHWA issued a rule that incorporated crash test standards into regulation by reference as guidance. FHWA stated at the time that it lacked sufficient knowledge to be more prescriptive about roadside safety hardware in general and chose not to make crash testing mandatory through regulation. FHWA has not issued a proposed rulemaking to require crash test standards since. FHWA officials told us that they believe encouraging state compliance is more effective than requiring it through a rulemaking because the current partnership with AASHTO garners support from states, and a federal rulemaking can take many years to complete.
Most states that responded to our survey told us that roadside safety hardware installed on the NHS is required to be crash tested, and many of those states said they had processes in place to limit installation of roadside safety hardware to those that have obtained FHWA eligibility letters. Nearly all, 43 of the 44 states that responded to our survey, told us that crash testing to MASH or NCHRP Report 350 standards is required in their state for major categories of roadside safety hardware. In addition, 38 of 44 states also responded that they maintain lists of “approved” or “qualified” products from which contractors can choose roadside safety hardware for installation. Furthermore, 32 of the 38 states with these lists responded that all roadside safety hardware on their qualified or approved product lists have an FHWA eligibility letter. While our survey results indicate that FHWA’s guidance has been widely implemented at the state level, they also indicate some inconsistencies in state policies and some misperception about FHWA policy. First, 10 states responded that they do not have a specific law, regulation, or policy document that establishes crash-testing requirements. In follow-up responses, four of these states told us that they do not have documented requirements because they believe FHWA requires crash-testing of roadside safety hardware and that the FHWA requirement governs roadside safety hardware in their state. If a state’s policy is only to refer to a federal requirement that does not exist, then effectively no requirements govern crash testing in that state. Second, while most states approve installation of only roadside safety hardware that has received an FHWA eligibility letter, not all states do so.
For example, 11 states reported that they have conducted their own crash testing in the past 10 years, and 6 of the 11 responded that they do not always submit those roadside safety hardware devices for FHWA review prior to approving devices for installation. Officials from one state told us they only submit results for eligibility letter review when they believe the device is likely to be used by other states. Federal standards for internal control highlight the need for agencies to design control activities—policies, procedures, techniques, and mechanisms—to achieve objectives and address related risks. In June 2012, FHWA issued a memo indicating that division offices should encourage states to have written policies that incorporate AASHTO’s guidance on current roadside safety information and operating practices. However, FHWA has not directed its division offices to help ensure that states have policies governing crash testing of roadside safety hardware installed on their roadways as part of this procedural review. Officials in the five FHWA division offices we interviewed told us they have a procedure for reviewing states’ standards and design specifications, which could include states’ standards and requirements for roadside safety hardware. However, officials in FHWA division offices we interviewed said that they generally do not examine roadside safety hardware practices on individual projects as part of FHWA’s risk-based oversight. Officials in one division office noted that topics like pedestrian safety would be a higher priority for the division office because officials said there are more pedestrian deaths than there are deaths from roadside safety hardware. Division office officials stated that they rely on the state to ensure that what is incorporated in the project meets state standards, and officials from four out of five division offices stated that they do not verify that states are installing state-approved products. 
Furthermore, FHWA’s Office of Safety officials told us that they do not monitor and collect information on state policies with respect to roadside safety hardware. The absence of written requirements at the state level and inconsistencies in state practices could, in some cases, reduce assurance that states are fully implementing appropriate crash-testing standards. According to FHWA and AASHTO, MASH crash test standards are an improved set of standards because they better reflect the current vehicle fleet, which has become heavier and taller over the past 25 years. Two studies compared the NCHRP Report 350 standard testing to the MASH test standards. The results of these studies indicated that in some cases MASH test standards provide a more rigorous evaluation for crash testing roadside safety hardware. First, in 2010, NCHRP conducted an evaluation of existing roadside safety hardware devices approved under NCHRP Report 350. Re-testing these devices and evaluating performance using the criteria in MASH revealed that 6 of the 21 tests performed on NCHRP Report 350-compliant roadside safety hardware devices did not pass. Second, in September 2015, a joint AASHTO/FHWA review of guardrail end terminals concluded that the MASH crash test standards incorporate tests relevant for guardrail end terminals that are not included in NCHRP Report 350 test standards. Specifically, the study found that NCHRP Report 350 standards do not fully address performance issues in the areas of side and shallow-angle impacts. The study recommended fully implementing MASH for new installations of guardrail end terminals. States have been slow in transitioning to implement the MASH crash-test standards. In 2009 AASHTO and FHWA issued a Joint Implementation Plan adopting MASH as the updated crash-test standards necessary for an applicant to receive an FHWA eligibility letter for a new roadside safety hardware device.
However, this plan said that states could continue to install roadside safety hardware tested to the previous NCHRP Report 350 standards. Therefore, manufacturers could continue to produce, and states could continue to install, roadside safety hardware that had already received an eligibility letter without retesting to MASH crash test standards. In January 2016, FHWA and AASHTO released a new Joint Implementation Plan stating that states should transition to installing only MASH-standard-tested roadside safety hardware. According to the plan, FHWA will no longer issue eligibility letters for new or modified roadside safety hardware tested to standards other than the MASH crash-test standards. The 2016 Joint Implementation Plan calls for states to complete the transition to the MASH crash-test standards between December 2017 and December 2019, depending on the type of hardware. (See table 1 below.) If states comply with the 2016 Joint Implementation Plan’s dates for transitioning roadside safety hardware installations to meet the MASH crash-test standards, this transition will be 8 to 10 years after the 2009 Joint Implementation Plan, and states may continue to install non-MASH-tested hardware on the NHS until December 2017 at the earliest. FHWA officials noted that roadside safety hardware often remains on the roads for at least 20 years before being replaced due to aging, so hardware tested to the older NCHRP Report 350 standards could be on the roads for years to come. However, at this point it is not clear that states will be able to comply with the dates set in the plan. In order to meet the transition dates, industry will have to develop and test products to the MASH standards that have not previously been tested to these standards, and FHWA will have to review applications for eligibility letters from developers that request them.
States will then have to make changes to either their design and specification policies or approved lists of products to incorporate only MASH-tested roadside safety hardware. Industry, to this point, has been slow to move to develop and test products to the MASH standards. Using eligibility letters as an indicator, as of March 2016, there are currently only two guardrail end terminals with eligibility letters that have been tested using the MASH standards, compared to the 13 guardrail end terminals tested to NCHRP Report 350 with eligibility letters. In the category of longitudinal barriers, there were only 17 MASH-compliant eligibility letters among the 348 active eligibility letters. In an open letter to AASHTO, the American Traffic Safety Services Association, an association representing highway safety industries, expressed concern about whether industry will have enough hardware that meets the MASH crash-test standards by the transition dates, as well as about the ability of states to approve new hardware and of FHWA to post new eligibility letters in a timely manner. FHWA officials told us that states and manufacturers have responded positively to the new deadlines. However, FHWA officials did express some concern as to whether states will be able to fully implement MASH standards by the dates in the 2016 Joint Implementation Plan. Their concerns included the need for the market to react in a timely manner and have enough products available to support competition, and to invest in testing categories of roadside safety hardware that have had little testing to MASH standards to this point. FHWA officials also told us that as industry reacts to the dates, FHWA will likely have an influx of requests to review eligibility letters; FHWA officials told us that they already have a backlog of eligibility letter applications since FHWA stopped issuing eligibility letters for modifications to hardware tested to non-MASH standards at the end of 2015.
Federal standards for internal control highlight the need for agencies to obtain information needed to achieve their objectives from external parties, including significant matters related to risks. FHWA officials stated they will be in a better position in a year to say whether states are likely to be able to successfully transition to MASH crash-test standards by the dates specified in the January 2016 Joint Implementation Plan. However, FHWA has not developed a plan to track progress of the states and industry in meeting the new dates. Moreover, we found that FHWA and states currently do not collect information that would assist in monitoring the transition to MASH standards. For example, as discussed in the following section, FHWA can interact with developers and crash test labs during the test process, but FHWA does not collect information from developers and labs to be informed when hardware that was previously tested to older standards is re-tested to MASH and fails. Without this information on test failures, FHWA and states may be unaware of setbacks to the transition. Also, if states do not have this information, it may result in the states unknowingly installing failed hardware during the transition period. In addition, 12 states responded to our survey that they currently do not require developers to notify them of modifications made to an approved device. While FHWA requests such notification, 3 of the 12 states do not have eligibility letters for all approved devices and could be unaware of design changes. Federal standards for internal control also highlight the need for agencies to provide quality information to external parties, including the general public to help achieve agency objectives. Monitoring and reporting industry and state progress to the goal dates set in the 2016 Joint Implementation Plan would allow FHWA to keep decision makers in both DOT and Congress aware of progress. 
Such monitoring and reporting of progress would also position FHWA to take corrective actions as needed to better assure that states and industry are successfully moving to meeting improved standards. FHWA contracted in May 2015 with DOT’s Volpe National Transportation Systems Center to conduct a full review of its roadside safety hardware oversight process and expects a report with recommendations for potential changes to its oversight program in summer 2016. Officials stated the review will include a full examination of the process by which roadside safety hardware is developed, evaluated, funded, and assessed, as well as recommendations for any improvements needed. Specifically, the report will include: documentation of existing laws, regulations, policies, standards, and guidelines associated with the roadside safety hardware process; documentation and review of all the steps in FHWA’s current crash-testing evaluation process; and findings and recommendations to FHWA to improve its oversight. FHWA officials told us that there may be ways to improve the agency’s oversight of roadside safety hardware and that everything in the process, from the partnership relationship with AASHTO to the eligibility letter process, will be included in the review. During the course of our review, FHWA implemented some changes to its program, such as clarifying the need for any modifications to hardware with eligibility letters to be reevaluated, but FHWA officials stated they were holding off on major changes to the current oversight program until the Volpe National Transportation Systems Center’s review is complete. At all nine U.S. labs accredited to conduct crash testing of roadside safety hardware for FHWA review, laboratory crash testing was well documented and thorough in terms of consistency in documentation and test procedures across labs. As part of the crash-testing process, labs and test sponsors have discretion in making testing decisions in several important areas.
In addition, there is an inherent potential threat to independence in the testing process because employees in some labs can test devices that were developed within their parent organization. The independence requirement in the standards used to accredit labs is general, and we found varying interpretations and differences in approaches for mitigating threats to independence across the labs. FHWA does not require third party verification of crash testing and does not make its own pass/fail determinations or provide for independent pass/fail determinations for test results. FHWA also does not provide additional guidance to labs and accrediting bodies on independence mitigation measures for crash testing roadside safety hardware. We found that some other federal agencies with similar testing programs have more measures than FHWA has to mitigate potential risks to independence. FHWA requires that crash test labs conducting testing for the purposes of FHWA eligibility letters be accredited to International Organization for Standardization (ISO) 17025 standards, which contain management and technical requirements for labs to be deemed competent to run laboratory testing. There are nine crash test labs in the United States that are accredited to these standards and conduct crash testing for the purposes of FHWA eligibility letters. Our review of the nine accredited U.S. labs found that individual crash tests were well documented and thorough because test reports contained documentation that would allow a third party to understand how the lab conducted the test and how the test results were interpreted. To evaluate the thoroughness and documentation for labs’ crash testing, we created both interview questions and a document request list for all the labs based on international accreditation requirements as well as the crash-testing guidelines in MASH. 
All nine example test reports we reviewed clearly identified the test standard and the test level the lab used to test the roadside safety hardware device. The pass/fail criteria being used to evaluate the roadside safety hardware device was clearly identified, and all reports described the test results against each of the evaluation criteria. In addition, all reports described the setup of the device and pre-test procedures, which could include verifying the integrity of the soil, when applicable, and structural integrity of the test vehicle. Each report also included between 20 and approximately 100 pictures of the testing process, along with a description of the results. For more information on the documents we requested and reviewed, see appendix I. Labs generally described using requirements specified in test or accreditation standards as the basis for their procedures. Labs described sending equipment out to a qualified calibration laboratory, or obtaining additional expertise and certification to calibrate their own equipment. Several labs also stated that they keep the test objects on site for a period of time, in case follow-ups were needed. Specifically, five labs told us that they kept test documentation on file for at least 2 years, and in three of these cases, kept records indefinitely. Labs also described going beyond what standards require in certain instances. For example, five of the nine labs described using additional cameras or data recording devices to better capture data that would be useful to industry research or to the customer. Accrediting bodies are expected to use the ISO 17025 standards, along with test standards specific to the industry, such as the MASH crash-test standards in this case, as the basis for accrediting crash test labs. Officials from three labs said they had developed documentation practices specifically in reference to the accreditation process.
Each accreditation body said that it conducts routine audits each year as part of its accreditation cycle, where accreditation bodies told us they evaluate such aspects of the testing process as setup, equipment calibration, competence of personnel, documentation, and record keeping. One lab reported that its accrediting body assisted it with improved lab procedures by setting up calibration procedures; two labs reported that accreditation requirements guide their policies on document retention. In addition, accreditation standards require labs to collaborate and compare results in inter-laboratory collaborations, a procedure that labs do via Task Force 13, in order to work toward greater consistency in test procedures and results interpretation. Although individual crash tests are well documented, full-scale crash testing to evaluate the performance of an individual piece of hardware is a complex process that requires labs to use professional judgment when deciding which tests need to be run, and how to interpret the results. Both NCHRP Report 350 and MASH have a suite of tests in order to cover a range of crash speeds, angles, and size and weight of vehicles to assess the performance of the roadside safety hardware device. A majority of labs (five of nine) reported that they usually recommend to test sponsors that they run the full suite of tests outlined in the test standard. However, for modified devices that have previously been tested, there is some discretion on which tests to run. Because an individual full-scale crash test can cost about $55,000 (according to a crash test lab we spoke with), it is advantageous to run only the tests that test sponsors think are most critical for a given device. For example, in one of the testing scenarios we reviewed, the lab engineers determined based on prior testing with a larger vehicle that the MASH test for small cars would not be necessary for the tested device.
Although this decision is documented in the test report, the reasoning is not detailed, making it difficult for a reviewer to evaluate the decision. Of the nine labs we interviewed, four told us that they frequently consult with FHWA in the test-planning process, and that these labs generally run the tests in agreement with FHWA. The other labs told us they either rarely or never consult with FHWA; these labs encourage test sponsors to communicate directly with FHWA if they plan to seek an eligibility letter, and in these cases the lab runs the tests the sponsor requests. As part of the eligibility letter process, labs can, but are not required to, consult with FHWA for advice on which tests to run. Labs have some discretion in interpreting the test results against the pass/fail criteria of the crash test standard. According to MASH crash test standards, some interpretation will be necessary for the criteria due to the “very complex nature of vehicular collisions and the dynamic responses of an occupant to the collision, as well as human tolerances to impact.” Eight of the nine labs reported that engineering judgment was necessary to make a pass/fail decision in at least a small minority of tests, although one lab reported that up to 30 percent of all NCHRP Report 350 or MASH crash tests require professional judgment. One lab noted that MASH has more specific criteria than NCHRP Report 350, but leaves room for interpretation when it comes to defining failure limits for penetration, occupant intrusion, and deformation limits, which are all part of the occupant risk criteria. For example, six of the labs reported that occupant intrusion standards were the main subjective parameter, because characteristics such as the amount, type, and location of the intrusion were important in determining whether the occupant could be harmed.
Lab officials told us that MASH standards, which specify maximum allowable levels of occupant intrusion, do not always address applied testing scenarios. For example, officials in one lab described a test on a post that sliced and made holes in the floor pan of the test vehicle. The lab officials said they interpreted this to be a failure, although lab officials suggested that MASH crash test standards do not specify whether holes in the floor of the vehicle mean the test fails. In the roadside safety hardware-testing community there is an inherent potential threat to lab independence because there is often not a formal separation between design and testing roles within a lab’s parent organization. Specifically, six of the nine crash test labs we reviewed can test products that were developed by employees of the same parent organization. Two manufacturer-owned labs can test products created by another division of the same company; three university-run labs can test products designed by university employees; and a state-based facility tests products designed by the state department of transportation. In order to be an accredited lab, the ISO requires labs to identify any conflicts of interest and have policies to ensure labs are free from undue pressure. The three accrediting bodies we interviewed told us that the ISO requirements are usually met by documented conflict-of-interest policies and by having an organizational structure in which lab employees do not have conflicting lines of reporting to their parent organization. However, documentation we obtained and interviews we conducted with labs and accreditation bodies revealed varying interpretations of what level of involvement by the device designer in the crash-testing process, and by the lab in providing design feedback based on crash test results, is appropriate to ensure independence.
For instance, while four labs told us they would offer advice on how to redesign the device if it failed a crash test, the other five labs said they did not make such recommendations, and one specifically said it interprets ISO standards to mean that labs should not be involved in making design recommendations after testing. The ISO standards are intended to broadly cover testing and calibration laboratories across many industries to ensure technical competence. Varying interpretations suggest a lack of specificity in ISO requirements to ensure independence in the testing of roadside safety hardware. Of the six labs that test devices developed within their parent organization, two labs told us they have policies that formally separate the role of the designer and the tester, although only one had this policy documented. One of the two labs designates an independent approving authority to make the final determination of whether the hardware passed or failed each test and specifies that this person could not have been part of the design or development of the hardware. The other told us that if a member of the lab was involved in the design of a device, that person would not be allowed to make the pass/fail determination. However, the other four labs do not have a separation that is this clear. Two of these labs provided us with the general conflict-of-interest policies of their parent organizations, and two labs pointed us to conflict-of-interest policies in their quality manuals, which did not have information about separating design and testing. The Committee of Sponsoring Organizations of the Treadway Commission (COSO), a joint initiative of multiple private-sector-accounting organizations, publishes the Internal Control-Integrated Framework to help organizations design internal controls to achieve their objectives.
This framework highlights the importance of the separation of duties within an organization, to reduce the risk of inappropriate conduct in the pursuit of objectives. The standard states that when selecting and developing control activities, management should consider whether duties are divided or segregated among different people to reduce the risk of error or inappropriate or fraudulent actions. Labs that do not have this formal separation between design and testing functions could have threats to the independence of their test analyses. One of the three accrediting bodies told us that independence can be difficult to assess because it is not clear what labs that are affiliated with manufacturers, for instance, must do to mitigate any conflicts of interest. Officials from two accrediting bodies told us that other federal agencies provide them with additional guidance on independence and technical expertise, respectively, and one accrediting body told us that it is preferable when an agency provides guidance so that the accrediting body can better apply standards when accrediting labs in a specific industry. For example, officials from this accrediting body provided an example of a federal agency that has developed more specific ethics and integrity requirements than the ISO. Accrediting body officials told us that this agency requires the accrediting body to assess the labs to these more specific requirements. Federal standards for internal controls state that agencies should establish policies and procedures to respond to risks as part of their internal control system. However, apart from the accreditation requirement, FHWA does not have other mitigation measures in place with regard to lab independence. FHWA does not provide guidance to crash test labs or accrediting bodies on mitigating the risks posed by threats to independence. Providing such guidance could provide greater assurance that crash-testing is being performed in an independent, unbiased fashion. 
FHWA reviews crash test results as part of its eligibility letter process; however, FHWA does not have a process for formally verifying the testing outcomes and making or providing for an independent pass/fail determination. FHWA relies heavily on the labs to determine whether the crash test outcome results in a pass or fail determination for roadside safety hardware. According to ISO standards for accreditation, when a lab states whether a product complies with requirements, it is offering an opinion, and it must be marked as such. Officials from one accrediting body said it would be preferable for labs to provide only the crash-test result data and have a third party apply criteria in MASH crash test standards and make the pass/fail determinations. Officials at one lab we spoke to added that they prefer not to make pass/fail determinations, but they do so for each test. In the eligibility letter process, FHWA requires that lab personnel apply the results to relevant crash-test standards and make a pass/fail determination of the test results. FHWA officials explained that as part of their eligibility-letter review process, they examine the crash test lab report, including pictures, videos, and the test data summary sheets. If FHWA officials have questions, they will contact the lab or developer. However, eligibility letters state that FHWA is relying on the assessment of the lab. Moreover, we reviewed 10 case files for eligibility letters issued between 2005 and 2015 and found that documentation was not sufficient to determine the rationale behind FHWA’s decision to issue these letters. For more information on our review of FHWA’s eligibility letter process, see appendix II. FHWA officials acknowledged that lab employees’ testing of devices developed within their parent organization poses the appearance of an independence threat. 
In May 2015, FHWA issued a memo directing developers and test labs to submit financial conflict-of-interest information in order for a developer to receive an eligibility letter. FHWA officials told us that this information will not influence a device’s ability to receive an eligibility letter, but that the information could be published along with the final eligibility letter for the public to review, in an effort to increase transparency. FHWA officials also told us that this was an immediate change they could make but that they are awaiting the results of the Volpe National Transportation Systems Center’s review before deciding whether to take additional steps in this area. According to federal internal control standards, agencies should ensure that they communicate quality information to external parties so that those parties can help the agency achieve its objectives and address related risks. However, as explained above, there is a potential threat to independence in the lab crash-testing environment for roadside safety hardware. In other test settings, we found that federal agencies require third-party verification of test results or independent entities to make pass/fail determinations. We found that both the Environmental Protection Agency (EPA)—in its ENERGY STAR Program—and the National Highway Traffic Safety Administration (NHTSA)—in its testing for Federal Motor Vehicle Safety Standards and the New Car Assessment Program—have stricter oversight over the lab-testing process and require third-party certification and/or verification testing. EPA’s ENERGY STAR Program is a voluntary program to identify and promote energy-efficient products and buildings. Lab testing of products is conducted to determine whether a product meets program specifications for efficiency. 
As we previously found, the testing requirements for EPA’s ENERGY STAR program have evolved in response to weaknesses identified in the program by us in 2007 and by EPA’s Office of Inspector General in 2008, including a lack of assurance that tested products met the qualification criteria. In response to these findings, EPA and the Department of Energy signed a memorandum of understanding in 2009 to propose several program enhancements. As part of a review of this program conducted before these changes had been implemented, we submitted fictitious products for certification and found that the program was vulnerable to fraud and abuse because manufacturers could self-certify that their devices met energy standards without third-party verification. In 2011, we found that EPA had made considerable progress in addressing these issues by including verification testing and third-party certification in the approval process. Currently, in order to earn an ENERGY STAR label, products must be tested by EPA-recognized laboratories, and a subset of products is verified annually by third-party certification entities. EPA standards require labs, their accrediting bodies, and third-party certification body laboratories that verify test results to abide by respective sets of conditions and criteria in order to be recognized by the ENERGY STAR Program. EPA also has an application process for all three types of entities to receive ENERGY STAR Program recognition. Under this process, EPA requires that labs test products for review by third-party certification bodies, which determine if the devices meet the program standards for a product to carry an ENERGY STAR label. EPA’s standards for ENERGY STAR recognition also require certification bodies to verify lab test data and make a pass/fail determination, and EPA officials added that the labs themselves are not supposed to make this decision. 
In addition, certification bodies are to conduct verification testing for a sampling of devices, including off-the-shelf devices, across multiple categories each year. EPA officials told us they closely oversee the certification bodies through frequent communication and periodic audits. We also interviewed officials at NHTSA regarding two forms of vehicle crash testing that they oversee. First, NHTSA issues federal motor vehicle safety standards (FMVSSs), with which vehicle manufacturers must comply, and manufacturers must self-certify that their products meet these standards. NHTSA then sponsors verification testing of some vehicle models, under which it purchases vehicles from dealer lots and subjects them to crash testing to confirm the manufacturer’s certification. The verification testing is conducted by labs selected by NHTSA. NHTSA officials told us that they would not select a lab that has the potential conflict of interest of being part of a manufacturer’s business. Once the testing is conducted, the raw data is sent to NHTSA, and NHTSA officials make the determination as to whether the vehicle has met the FMVSSs. Testing for NHTSA’s New Car Assessment Program is similar to that for the FMVSSs in design, but rather than checking whether vehicles meet minimum safety standards, NHTSA awards vehicle models up to 5 stars for their safety performance in crash testing to standards that, according to NHTSA officials, typically exceed those in the FMVSSs. Officials stated that because these standards are not federal requirements, vehicle manufacturers do not have to comply or self-certify compliance. Officials noted, however, that because the New Car Assessment Program provides safety ratings information to consumers, manufacturers have an incentive to receive the highest safety rating possible. Similar to testing under the FMVSSs, officials said that NHTSA purchases vehicles from dealer lots and then tests them at selected labs. 
The crash-test data is then sent to NHTSA, where NHTSA officials determine the star rating for each test vehicle. In contrast to these programs, FHWA does not require either third-party certification or verification of crash testing; nor does FHWA provide additional guidance on independence mitigation measures for crash testing roadside safety hardware. Establishing a process for third-party verification of crash test results could provide greater assurance that threats to independence are fully addressed. FHWA officials told us that they would favor considering some form of third-party review of crash test results. These officials added that having FHWA conduct the third-party review could be challenging and that FHWA would need to assess the resources, technical capacity, and legal capacity to perform that role. According to FHWA and AASHTO-sponsored research, in-service performance evaluations (ISPE) are recommended for effective roadside safety hardware oversight because real-world crash conditions, such as vehicle characteristics, as well as the terrain of the roadway, may vary widely from those experienced in crash testing. Moreover, crash testing cannot fully replicate the effects of installation conditions over time on roadside safety hardware’s performance. In establishing a methodology for conducting ISPEs, NCHRP Report 490 states that collecting crash data over multiple years and examining crash sites in real time can enable researchers to report more information on roadside safety hardware’s installation and maintenance issues, the costs associated with making repairs to damaged hardware, and the severity of injuries resulting from crashes that involve roadside safety hardware. This can better equip states to make cost-benefit determinations regarding roadside safety hardware replacement or new product development. ISPEs can also inform whether crash-testing standards are appropriately suited to assessing the effectiveness of roadside safety hardware. 
Based on our review of studies published since 1993, when FHWA recognized NCHRP Report 350 testing standards, few formal ISPEs of roadside safety hardware have been conducted to fully assess the performance of roadside safety hardware in actual conditions. After reviewing government, industry, and academic sources, we found 14 formal ISPEs that were published since 1993. While other studies included elements of an in-service performance evaluation, only these 14 studies combined crash data analysis with real-time visits to crash sites to document and assess the damage, which is a key characteristic of a formal ISPE as defined by NCHRP Report 490. Additionally, these ISPEs tended to focus on longitudinal barriers, such as guardrails and cable barriers, and barrier terminals, such as guardrail end terminals, while other types of roadside safety hardware were generally not the subject of ISPEs. A key challenge to states conducting ISPEs appears to be the lack of fully developed data. As NCHRP Report 490 indicates, having inventory data on the number of roadside safety hardware devices being studied and their location is critical to calculating rates of collision with roadside safety hardware within the study area. In our survey of state DOTs, we asked officials to describe their data and inventory efforts, and states reported a general lack of established inventory data. As table 2 shows, a majority of states indicated that they have inventory data-collection efforts for barrier terminals/crash cushions, for example, but many of these efforts are new or ongoing and therefore are not fully established. For instance, of the 29 states that reported in response to our survey that they have inventory data-collection efforts for barrier terminals/crash cushions, 18 said that their efforts are ongoing, and 12 of these said that they had only been collecting data since 2014. 
State DOT officials we interviewed in four states also told us that inventory data they collect may not include information on condition or location of roadside safety hardware, which, as NCHRP Report 490 notes, is necessary for a full understanding of performance. The other key piece of data is crash data. The current state of crash data reporting may not facilitate conducting ISPEs of roadside safety hardware. According to NCHRP Report 490, police will likely not comment as part of their crash records on factors like soil conditions, which could influence how guardrail posts, for instance, function in a crash. Police crash records also do not capture any unreported collisions and may not consistently document the type of roadside safety hardware involved in an accident. According to our survey, only 6 of 44 states that responded said that they had conducted any formal ISPEs in the last 10 years. State officials we interviewed also described less formal efforts to evaluate roadside safety hardware’s performance. For instance, officials in one state told us that they perform a trial run for any new proprietary roadside safety hardware device in a sampling of locations and monitor the in-service performance on site for 12–18 months prior to approving the device for use by contractors across the state. However, state officials told us this effort is not published in a report. Without published results that document a methodology that others can repeat, such efforts do not ultimately add to the broader knowledge base of ISPEs. Officials in four of the five states we interviewed indicated that they have cost and/or data constraints related to collecting the necessary data to conduct formal ISPEs. Officials from the fifth state we interviewed described a software application they developed to inventory all of the guardrail end terminals in their state. 
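The role inventory data plays in the NCHRP Report 490 methodology can be sketched with a simple calculation: the device inventory supplies the exposure denominator, without which reported crashes alone cannot yield a collision rate. The figures below are hypothetical, illustrative values, not data from any state or from the report.

```python
# Illustrative sketch of the collision-rate calculation that inventory data
# makes possible under the NCHRP Report 490 approach. All figures below are
# hypothetical; no state in our review reported these numbers.

def crashes_per_1000_device_years(reported_crashes, device_count, study_years):
    """Crashes involving a hardware type per 1,000 device-years of exposure."""
    device_years = device_count * study_years
    return 1000 * reported_crashes / device_years

# Hypothetical study area: 4,500 inventoried guardrail end terminals and
# 135 reported collisions over a 3-year study period.
rate = crashes_per_1000_device_years(reported_crashes=135,
                                     device_count=4500,
                                     study_years=3)
print(f"{rate:.1f} crashes per 1,000 device-years")  # 10.0
```

The sketch also illustrates the data gaps the report describes: missing inventory counts leave the denominator undefined, and unreported collisions understate the numerator.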
According to state officials, local maintenance crews in the state use a custom web application on a mobile device to record the total number, along with data on the type and location, of guardrail end terminals in their state. This data is then uploaded to a central database. State officials said they were planning to make this a long-term project and apply it to other types of roadside safety hardware. Officials noted that they are still in the process of adding the capability to keep the data up to date. These officials also told us that the application was relatively inexpensive to develop, and FHWA officials noted that at least one state was interested in learning more about the application. State officials told us that, to date, however, this technology has not been shared across states. FHWA has ongoing research to identify best practices for the collection of data on roadside safety hardware. However, this research is limited to guardrail end terminals, and the planned scope of work may not be sufficient to fill the gaps created by the lack of ISPE literature at the state level. FHWA officials told us that in the summer of 2015, FHWA began a pilot study on the collection of data on guardrail end terminals’ performance. According to FHWA officials, the first phase of this pilot study is expected to last through the end of 2016. Officials plan to identify current challenges to conducting ISPEs as well as recommend best practices for: (1) the collection of real-time data on crashes involving roadside safety hardware; (2) interagency communication at the state level regarding crash reporting; and (3) data management regarding hardware maintenance and location. FHWA is currently collecting inventory and crash data in four states (Missouri, Pennsylvania, Massachusetts, and California) that have agreed to participate in this pilot. 
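The kind of record the state inventory application described above collects and uploads can be sketched as follows. The report states only that crews record the number, type, and location of guardrail end terminals; all field names and values here are hypothetical, and the sketch is not the state's actual implementation.

```python
# Minimal sketch of an inventory record for a guardrail end terminal, of the
# kind a maintenance crew might capture on a mobile device and upload to a
# central database. Every field name and value is hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class EndTerminalRecord:
    terminal_id: str   # hypothetical identifier assigned by the crew
    device_type: str   # e.g., an energy-absorbing end terminal
    latitude: float    # location data, as the report describes
    longitude: float
    recorded_by: str   # maintenance crew identifier
    recorded_on: str   # ISO 8601 date of the field visit

record = EndTerminalRecord("GT-000123", "energy-absorbing terminal",
                           39.0458, -76.6413, "crew-17", "2015-10-02")

# Serialize the record for upload to the central database.
payload = json.dumps(asdict(record))
print(payload)
```

Aggregating such records per study area is what would supply the device counts that NCHRP Report 490 identifies as critical to rate calculations.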
FHWA officials stated that within a selected area of each state, data will be collected by examining crash sites for six different models of guardrail end terminals. If the study were continued, this data could produce the information needed to assess the devices’ performance with respect to the risk of severe occupant injury. According to FHWA officials, crash specialists from NHTSA, the agency that collects and reports data on fatal crashes for DOT, will conduct detailed on-site investigations for fatal and serious injury crashes, generally within 24 hours of receiving notification of the crash. Data on crashes resulting in property damage only and on other minor crashes will also be collected. Officials told us that they plan to continue collecting data through 2016 for this phase of the project. According to FHWA officials, however, publishing findings on the effectiveness of guardrail end terminals’ performance is not part of their current efforts because they first want to provide guidance to states on best practices for performance data collection. Officials noted that the time frame for the current phase of the pilot would be insufficient to collect enough data for statistically significant findings. FHWA officials told us that they will not determine whether to include performance findings as part of future phases of the pilot study until this phase is complete at the end of 2016. As noted previously, FHWA’s Office of Safety includes in its mission the need to advance the use of scientific methods and data-driven decisions in highway policy. The current lack of in-service performance findings and established inventory data for roadside safety hardware poses challenges for states making data-driven decisions about highway maintenance. FHWA officials told us they currently have no plans to include additional ISPEs for other types of roadside safety hardware as part of their broader highway-safety research portfolio. 
Officials cited cost concerns with gathering data and explained that ISPEs would take on greater relevance in the future as more MASH-compliant devices are installed on roadways. However, continuing this study and reporting on the performance of guardrail end terminals, or planning to make ISPEs part of other future research, could add to the limited body of knowledge regarding the in-service performance of roadside safety hardware. FHWA officials also noted that hardware already installed could remain on the roadways for 20–30 years. ISPEs on current devices can therefore still provide states with critical information regarding how they might prioritize maintenance tasks—such as replacing older devices—to best ensure safety for their motorists. Without robust, ongoing in-service performance evaluations, less safe hardware may remain in use longer than is necessary. FHWA’s cooperation with AASHTO and state DOTs has resulted in states having policies to install crash-tested roadside safety hardware on the NHS. However, challenges exist for states, industry, and FHWA as the improved MASH crash-testing standards are phased in over the next few years. These changes will require cooperation and action from industry, the states, and FHWA. FHWA has the opportunity to exercise more robust oversight to ensure greater consistency in the implementation of improved crash test standards. First, FHWA, through its division offices’ oversight of states’ standards and design specifications, can help ensure that states have written policies in place that fully reflect the terms of the 2016 state-approved Joint Implementation Plan to address inconsistent practices across states. 
Second, monitoring and reporting the states’ and industry’s progress transitioning to the MASH crash test standards, as federal standards for internal controls suggest, and making this information available to Congress and the public would facilitate transparency and position FHWA to consider midcourse corrections if required. FHWA can also take steps to strengthen its role in the assessment of roadside safety hardware performance—both in the test lab and once installed on the roadways. Because FHWA’s current oversight process does not include verification of lab crash-test results and no specific mitigation measures are in place to address potential threats to independence, the risks to ensuring the integrity of the crash-testing process remain unaddressed. Other agencies have introduced policies or processes into the testing process that mitigate these types of issues; FHWA could take similar actions. In addition, other agency practices provide a model for FHWA of closer cooperation with the labs and accreditation bodies to address the independence issues unique to roadside safety hardware’s testing. FHWA also has the opportunity to advance its mission in the scientific evaluation of roadside safety hardware. FHWA has a pilot project underway that is examining data collection practices for in-service performance evaluations but currently has no plans to report on performance findings from either this study or other research in its portfolio. Continuing this study or planning to make ISPEs part of future research could add to what is currently a limited body of knowledge regarding the in-service performance of roadside safety hardware. FHWA is poised to consider changes to its approach to roadside safety hardware through a full programmatic review to be completed in the summer of 2016. 
Opportunities exist to address all these issues and to provide states, industry, and the traveling public greater assurance that FHWA is fulfilling its safety mission and advancing roadside safety. To promote the transition to improved crash test standards, to strengthen FHWA’s oversight of the roadside safety hardware crash-testing process, and to make more information available to states and industry on how roadside safety hardware performs in actual conditions, we recommend that the Secretary of Transportation direct the Administrator of FHWA to take the following five actions:
1. Direct FHWA’s division offices to help ensure, through their oversight of states’ standards and design specifications, that states have written policies in place to require the installation of appropriately crash-tested roadside safety hardware on the NHS to address inconsistent practices across states.
2. Monitor and periodically report to Congress (or report through the agency’s publicly available website) the progress states and the industry are making in transitioning to the MASH crash-testing standards for roadside safety hardware.
3. Provide additional guidance to crash test labs and accreditation bodies to ensure that labs have a clear separation between device development and testing in cases where lab employees test devices that were developed within their parent organization.
4. Develop a process for third-party verification of results from crash-test labs.
5. Support additional research and disseminate results on roadside safety hardware’s in-service performance, either as part of future phases of FHWA’s current pilot study on guardrail end terminals’ performance or as part of FHWA’s broader research portfolio.
We provided a copy of a draft of this report to the Department of Transportation for review and comment. In written comments, reproduced in appendix III, DOT concurred with our recommendations. 
FHWA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Transportation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report addresses: (1) how FHWA performs oversight of state policies and practices related to roadside safety hardware; (2) the thoroughness of the crash-testing process and FHWA’s oversight of this process; and (3) the extent to which information is available on roadside safety hardware performance once installed. To assess FHWA’s role in the oversight of roadside safety hardware and related state policies, we reviewed FHWA documentation, including internal memos on roadside safety and implementation of the current eligibility letter process; guidance memos to the broader roadside safety hardware community; and policy documents such as the template for stewardship and oversight agreements agreed to with states. We also reviewed relevant laws and regulations governing FHWA’s oversight of roadside safety hardware. We interviewed FHWA officials in headquarters to understand how policies and practices are carried out. We also applied federal internal control standards for monitoring, designing control activities, and communicating with external stakeholders when reporting on the agency’s policies and practices for conducting oversight of roadside safety hardware. 
To better understand FHWA’s process in reviewing crash test information and issuing eligibility letters, we requested and received the FHWA files for 10 eligibility letters. Two case files were selected by FHWA as example files; we accepted these files and then selected eight more files from the roughly 1,000 available. To make our selection, we first limited the pool to only applications that came to FHWA since 2005. We then selected files to achieve variation across the following variables: age of application (variation across those 10 years); new device versus modification to existing device; type of device; type of standard tested to (MASH or NCHRP Report 350); and proprietary versus generic. Once the files were received, we reviewed each file to determine whether it had the information we would expect in order for a third party to understand how FHWA officials came to the conclusion to issue an eligibility letter. To better understand states’ roles in the oversight of roadside safety hardware, we developed and distributed a survey to all 50 states, plus the District of Columbia and Puerto Rico. Survey questions addressed topics including policies on crash testing of roadside safety hardware, procedures for ensuring that only crash-tested hardware is installed on the national highway system, efforts to collect inventory data on roadside safety hardware, and what, if any, research states had conducted in the last 10 years to evaluate the in-service performance of roadside safety hardware after it is installed. After developing the survey, we conducted four pre-test interviews with selected states to ensure that the questions were clear and appropriate for our research objectives. We adjusted the survey questions as needed in response to feedback prior to survey distribution. In October 2015, we distributed the survey to state department of transportation representatives from all 50 states, plus the District of Columbia and Puerto Rico. 
We followed up and collected responses until January 2016, at which point we had received responses from 44 of the 52 states and territories. In instances where states did not supply complete responses to individual questions, the answers to those questions were not included in the survey results, and those states were removed from the denominator for purposes of summary analysis. For selected questions, we conducted brief follow-up interviews and solicited written responses, when appropriate, in order to seek clarification or elaboration of states’ responses. To get more information on how states oversee roadside safety hardware, we selected five states—Maryland, Virginia, Ohio, Texas, and California—with which to conduct interviews with state department of transportation officials and the FHWA division offices that oversee the state departments of transportation. We selected these states based on the presence of an accredited crash-testing facility in the state and recommendations from stakeholders regarding the quality of performance-data collection efforts in those states. In the cases of Virginia, Ohio, and Texas, interviews with state officials were conducted on site, and in Ohio and Texas we also conducted interviews with crash-testing lab personnel and roadside safety hardware developers. To gain information on the thoroughness and independence of the crash-testing process and the extent to which FHWA oversight helps ensure this, we interviewed the nine domestic crash labs that are accredited, as required by FHWA, to international crash test lab standards to test roadside safety hardware for FHWA eligibility letters. To describe how labs are evaluated against international testing standards, we reviewed the international test lab accreditation standards in ISO 17025 and interviewed the three accrediting bodies that accredit the nine domestic crash-testing labs. 
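The per-question denominator rule described above, under which a state that skipped a question is excluded from that question's denominator, can be sketched as follows. The state codes, question names, and answers are hypothetical examples, not the survey's actual data.

```python
# Sketch of per-question denominator handling for survey summary analysis:
# states that did not answer a given question are dropped from that
# question's denominator. All responses below are hypothetical.
responses = {
    "AL": {"has_ispe": "yes", "has_inventory": "no"},
    "AK": {"has_ispe": "no"},  # skipped the inventory question
    "AZ": {"has_ispe": "yes", "has_inventory": "yes"},
}

def share_answering(question, answer, responses):
    """Return (count giving this answer, count answering the question)."""
    answered = [r[question] for r in responses.values() if question in r]
    return sum(1 for a in answered if a == answer), len(answered)

yes, total = share_answering("has_inventory", "yes", responses)
print(f"{yes} of {total} responding states")  # 1 of 2 responding states
```

This mirrors how summary figures such as "6 of 44 states" are reported relative to the states that actually answered.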
To evaluate the thoroughness and documentation of lab crash testing, we reviewed the accreditation requirements in ISO 17025 as well as the crash-testing guidelines in MASH, and analyzed these documents to create both interview questions and a document request list for all the labs, in consultation with our technologist. The questions addressed how labs ensure the quality of the testing environment, how they interpret test results, how they document each test, how they comply with conflict-of-interest requirements in the ISO standards, and how they communicate with FHWA. We also asked labs to submit their accreditation reports, quality manuals, any relevant conflict-of-interest or ethics policies, and a sample test report for us to review. We then asked the nine labs to walk us through a recent example of a product tested for FHWA compliance, in order to describe how policies and requirements are implemented in practice. We reviewed the conflict-of-interest policies to determine the extent to which there was variation in policies across the labs, and to evaluate whether there were mitigation measures for potential threats to independence. We also visited four crash test labs and witnessed two full-scale crash tests to gain a better understanding of the crash-testing process. To collect information on how other agencies oversee lab testing, we reviewed documentation and interviewed officials from the Environmental Protection Agency’s ENERGY STAR program, as well as the National Highway Traffic Safety Administration regarding their vehicle crash testing. Both agencies were referenced in our discussions with accrediting bodies as examples of other agencies that oversee accredited testing programs. To assess the extent of information available on roadside safety hardware performance once devices are installed, we conducted a literature search for in-service performance evaluations (ISPE) using government, academic, and trade publication sources. 
We also reviewed studies submitted to us by a highway design and roadside safety hardware engineering expert. For both sources of studies, we used the National Cooperative Highway Research Program (NCHRP) Report 490’s definition of an ISPE to define criteria for determining whether the studies we reviewed constituted ISPEs for the purposes of our report. Specifically, we looked for studies that combined analysis of crash data with real-time site visits. According to NCHRP Report 490, studies that retroactively or contemporaneously examine crash data are known as historical studies and collision studies, respectively, whereas an ISPE adds the element of real-time crash site analysis. NCHRP Report 490 notes that this technique allows researchers to better determine what type of hardware was struck, whether there were installation techniques or other site-specific characteristics that contributed to the crash, and whether the exact device is something a state DOT still uses. We also stipulated that the study in question involve a specific type of roadside safety hardware, which we defined according to FHWA’s categories of hardware for purposes of federal-aid eligibility letters. Moreover, we restricted our ISPE classification to studies published from 1993, when NCHRP Report 350 crash-testing standards were published and FHWA first recognized them, through 2015. As part of our literature search, we used online search terms that tailored the searches to specific types of roadside safety hardware as well as key methodological components, such as site visits. To inform all of the research questions, we also reviewed documentation and interviewed relevant officials from interested stakeholders. We reviewed standards and relevant guidance from the American Association of State Highway and Transportation Officials (AASHTO). 
To collect information on how crash test standards are developed and updated, we interviewed the AASHTO Technical Committee on Roadside Safety, as well as officials at the Transportation Research Board's National Cooperative Highway Research Program. To get a more detailed perspective on how industry, states, and crash-testing facilities collaborate, we attended a semiannual meeting of Task Force 13, a joint committee of AASHTO, the Associated General Contractors of America, and the American Road and Transportation Builders Association, which develops standards and specifications for bridges and roadside safety hardware. We also interviewed two roadside safety hardware developers. We conducted this performance audit from April 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We reviewed 10 case files for eligibility letters issued between 2005 and 2015 and found that the documentation was not sufficient to determine the rationale behind FHWA's decision to issue these letters. FHWA officials explained that as part of their eligibility letter review process, they examine the crash test lab report, including pictures, videos, and the test-data summary sheets. If FHWA officials have questions, they will contact the lab or developer. However, there is no structured protocol for documenting the steps FHWA reviewers took or the rationale behind the decision to issue an eligibility letter. For example, in two cases where FHWA officials asked questions and received answers from the lab or test sponsor, it was not possible to trace how FHWA made its determination to issue an eligibility letter. 
We also found three instances in which the full suite of testing was not performed, but no documentation was present explaining why the lack of testing was acceptable. We provided this information to FHWA officials, who acknowledged that the basis for those decisions was not documented but stated that in each case FHWA found the information and reasoning provided by the lab or test sponsor satisfactory. FHWA officials also told us they made changes to the eligibility letter review process in 2015, including documentation of communications with developers seeking an eligibility letter, a checklist for documenting reviews of eligibility letter requests, and updates to the eligibility letter request form to identify tests that are not critical or not relevant and the reasons why. FHWA officials told us that this checklist provides documentation from the submitter concerning why certain tests were not conducted or why modifications are considered non-significant. We did not evaluate the impact of these changes because they were made during the course of our audit work. Officials also characterized these changes as "interim" because the eligibility letter review process is part of an ongoing independent review of the program by the Volpe National Transportation Systems Center. In addition to the contact named above, Steve Cohen (Assistant Director), Melissa Bodeau, Devin Braun, William Egar, Sarah Farkas, Sarah Gilliland, Judy Guilliams-Tapia, David Hooper, Leslie Locke, Madhav Panwar, Malika Rice, Alexandra Squitieri, Jade Winfree, and Elizabeth Wood made key contributions to this report.
In 2014, 54 percent of traffic fatalities in the United States occurred as a result of a vehicle's leaving the roadway, according to the U.S. Department of Transportation's (DOT) data. Roadside safety hardware, such as guardrails, is meant to reduce the risk of a serious crash when a vehicle leaves the roadway. But in the last several years, a number of serious injuries and deaths have resulted from crashes into roadside safety hardware. GAO was asked to review FHWA's oversight framework for roadside safety hardware. This report assesses (1) how FHWA performs oversight of state policies and practices related to roadside safety hardware; (2) the laboratory crash-testing process and FHWA's oversight of this process; and (3) the extent to which information is available on roadside safety hardware's performance once installed. GAO reviewed federal and state policies, surveyed state DOTs and received 44 responses, and reviewed documentation from nine U.S. crash test labs. The Federal Highway Administration (FHWA) oversees and promotes states' installation of crash-tested roadside safety hardware through guidance and policy directives to states and by issuing letters to roadside safety hardware developers that provide states with information on roadside safety hardware that has been crash tested. States that responded to GAO's survey generally stated that they require crash testing. However, some inconsistencies across state practices exist, and states have been slow to require installation of devices successfully tested to the updated, improved crash test standards in the Manual for Assessing Safety Hardware (MASH). FHWA, in partnership with the American Association of State Highway and Transportation Officials (AASHTO), recently established transition dates to the MASH standards for states, but some challenges exist in developing and approving a sufficient quantity of roadside safety hardware tested to MASH standards. 
FHWA currently does not have a monitoring plan to report on progress toward meeting the established dates; monitoring and reporting would allow FHWA to keep decision makers aware of progress and position FHWA to take corrective actions as needed. In general, laboratory crash testing appears to be well documented and thorough; however, FHWA's oversight of the process does not address potential threats to independence. GAO found that six of the nine accredited U.S. crash test laboratories evaluate products that were developed by employees of the parent organization, a potential threat to lab independence. FHWA reviews crash test results and related documentation, if they are submitted for review, but FHWA relies heavily on the labs to make a pass/fail determination. GAO found that some other federal agencies that oversee similar lab testing require third-party verification of test results or rely on independent entities to make pass/fail determinations. FHWA does not have a process for formally verifying testing outcomes and making its own pass/fail determination or providing for an independent one. Developing a process for third-party verification of roadside safety hardware's lab test results could provide greater assurance that potential threats to independence are fully addressed. Little is known about the in-service performance of roadside safety hardware because few evaluations of this performance have been done. FHWA and AASHTO recommend that states and others perform in-service performance evaluations (ISPEs) of installed roadside safety hardware because crash testing cannot fully capture real-world crash conditions. However, few ISPEs have been done, in part because of a lack of inventory and crash data. 
In the summer of 2015, FHWA began a pilot study in four states that could provide useful information, but according to FHWA officials, the purpose of this phase of the pilot is to determine best practices for data collection rather than to assess the performance of roadside safety hardware. FHWA officials told GAO they currently have no plans to include performance findings as part of future phases of this study or in their broader research portfolio. Continuing this study or planning to make ISPEs part of future research could add to the limited ISPE body of knowledge. GAO is making recommendations, including that DOT monitor and periodically report on the transition to the MASH crash test standards; develop a process for third-party verification of crash test results; and support additional research on roadside safety hardware's in-service performance. DOT concurred with the recommendations and provided technical comments, which were incorporated in the report as appropriate.
We identified 11 new areas in which we found evidence of fragmentation, overlap, or duplication and present 19 actions to executive branch agencies and Congress to address these issues. As described in table 1, these areas span a wide range of federal functions or missions. We consider programs or activities to be fragmented when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need, which may result in inefficiencies in how the government delivers services. We identified fragmentation in multiple programs we reviewed. For example, the Department of Defense (DOD) does not have a consolidated agency-wide strategy to contract for health care professionals, resulting in a contracting approach that is largely fragmented. Although some of the military departments have attempted to consolidate their health care staffing requirements through joint-use contracts, such contracts accounted for only approximately 8 percent of the $1.14 billion in obligations for health care professionals in fiscal year 2011. Moreover, in May 2013, we identified several instances in which numerous task orders were awarded by a single military department for the same type of health care professional in the same area or facility. For example, we identified 24 separate task orders for contracted medical assistants at the same military treatment facility. By not consolidating its requirements, this facility missed the opportunity to achieve potential cost savings and other efficiencies. To reduce fragmentation and achieve greater efficiencies, DOD should develop a consolidated agency-wide strategy to contract for health care professionals. Fragmentation can also be a harbinger of overlap or duplication. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. 
We found overlap among federal programs or initiatives in a variety of areas, such as overlapping benefits between the Disability Insurance and Unemployment Insurance programs. In July 2012, we reported that 117,000 individuals received concurrent cash benefit payments in fiscal year 2010 from the Disability Insurance and Unemployment Insurance programs totaling more than $850 million because current law does not preclude the receipt of overlapping benefits. Individuals may be eligible for benefit payments from both Disability Insurance and Unemployment Insurance due to differences in the eligibility requirements; however, in such cases, the federal government is replacing a portion of lost earnings not once, but twice. The President’s fiscal year 2015 budget submission proposes to eliminate these overlapping benefits, and during the 113th Congress, bills have been introduced in both the House of Representatives and the Senate containing language to reduce Disability Insurance payments to individuals for the months they collect Unemployment Insurance benefits. According to the Congressional Budget Office (CBO), this action could save $1.2 billion over 10 years in the Social Security Disability Insurance program. Congress should consider passing legislation to offset Disability Insurance benefit payments for any Unemployment Insurance benefit payments received in the same period. In other areas of our work, we found evidence of duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Examples of duplicative, or potentially duplicative, federal efforts include DOD’s use of dedicated satellite control operations. We reported in April 2013 that DOD has increasingly deployed dedicated satellite control operations networks as opposed to shared networks that support multiple kinds of satellites. 
For example, at one Air Force base in 2013, eight separate control centers operated 10 satellite programs. Dedicated networks can offer some benefits to programs, but they can also be more costly to maintain and have led to a fragmented, and potentially duplicative, approach that requires more infrastructure and personnel to manage when compared with shared networks. While opportunities exist to improve DOD satellite control operations, we identified certain barriers that hinder DOD’s ability to increase the use of shared networks, such as the inability to quantify all spending on satellite ground control operations and the absence of DOD-wide guidance or a plan that supports the implementation of alternative methods for performing satellite control operations. These barriers also have hindered DOD’s ability to achieve optimal satellite control systems that would result in cost savings in this area. To address the duplication and inefficiencies that arise from dedicated satellite control operations networks, DOD should take actions to improve its ability to identify and then assess the appropriateness of a shared versus dedicated satellite control system. In addition to areas of fragmentation, overlap, and duplication, our 2014 report identified 15 new areas where opportunities exist either to reduce the cost of government operations or to enhance revenue collections for the Treasury and suggest 45 actions that the executive branch and Congress can take to address these issues. These opportunities for executive branch or congressional action exist in a wide range of federal government missions (see table 2). For example, to achieve cost savings, Congress may wish to consider rescinding all or part of the remaining credit subsidy appropriations to the Advanced Technology Vehicles Manufacturing (ATVM) loan program, unless the Department of Energy (DOE) can demonstrate sufficient demand for new ATVM loans and viable applications. 
We reported in March 2013 that DOE last issued a loan under this program in March 2011 and was not actively considering any applications for the remaining $4.2 billion in credit subsidy appropriations under the ATVM loan program. Also, most applicants and manufacturers we had spoken to indicated that the costs of participating outweigh the benefits to their companies and that problems with other DOE programs have tarnished the ATVM loan program, which may have led to a dearth of applicants. Since our March 2013 report, DOE has received one application seeking approximately $200 million. DOE recently stated that it has begun new outreach efforts to potential applicants that will increase awareness of and interest in the program and lead to additional applications in 2014. However, DOE has not further demonstrated a demand for ATVM loans, such as new applications that meet all the program eligibility requirements and involve amounts sufficient to justify retaining the remaining credit subsidy appropriations, nor has it explained how it plans to address challenges cited by previous applicants, including a burdensome review process. Determining whether program funds will be used is important, particularly in a constrained fiscal environment, as unused appropriations could be rescinded or directed toward other government priorities. We also identified multiple opportunities for the government to increase revenue collections. In particular, the federal government could increase tax revenue collections by hundreds of millions of dollars over a 5-year period by denying certain privileges or payments to individuals with delinquent federal tax debt. For example, Congress could enable or require the Secretary of State to screen and prevent individuals who owe federal taxes from receiving passports. 
We found that in fiscal year 2008, passports were issued to about 16 million individuals; of these, over 1 percent collectively owed over $5.8 billion in unpaid federal taxes as of September 30, 2008. According to a 2012 CBO estimate, the federal government could save about $500 million over a 5-year period by revoking or denying passports in cases of certain federal tax delinquencies. In addition to the new actions identified for this year's annual report, we have continued to monitor the progress that executive branch agencies and Congress have made in addressing the issues we identified in our last three annual reports. We evaluated progress by determining an overall assessment rating for each area and an individual assessment rating for each action within an area. We found that the executive branch agencies and Congress have generally made progress in addressing the 162 areas we previously identified. As of March 6, 2014, the date we completed our audit work, 19 percent of these areas were addressed, 62 percent were partially addressed, and 15 percent were not addressed (see fig. 1). Within these areas, we presented about 380 actions that the executive branch agencies and Congress could take to address the issues identified. As of March 6, 2014, 32 percent of these actions were addressed, 44 percent were partially addressed, and 19 percent were not addressed. Congress and executive branch agencies have made progress toward addressing our identified actions, as shown in figure 2. In particular, an additional 58 actions have been assessed as addressed over the past year. These addressed actions include 19 actions identified in 2011, 21 actions identified in 2012, and 18 actions identified in 2013. The following examples illustrate the progress that has been made over the past year: Farm program payments: In our 2011 annual report, we stated that Congress could save up to $5 billion annually by reducing or eliminating direct payments. 
Direct payments are fixed annual payments to farmers based on a farm's history of crop production. Farmers received them regardless of whether they grew crops and even in years of record income. The Agricultural Act of 2014 eliminated direct payments and should save approximately $4.9 billion annually from fiscal year 2015 through fiscal year 2023, according to CBO. Passenger aviation security fees: In our 2012 annual report, we presented options for adjusting the Transportation Security Administration's (TSA) passenger security fee—a uniform fee on passengers of U.S. and foreign air carriers originating at airports in the United States—to offset billions of dollars in civil aviation security costs. The Bipartisan Budget Act of 2013, enacted on December 26, 2013, modifies the passenger security fee from its current per-enplanement structure ($2.50 per enplanement with a maximum one-way-trip fee of $5.00) to a structure that increases the passenger security fee to a flat $5.60 per one-way trip, effective July 1, 2014. Pursuant to the act, collections under this modified fee structure will contribute to deficit reduction as well as to offsetting TSA's aviation security costs. Specifically, the act identifies $12.6 billion in fee collections that, over a 10-year period beginning in fiscal year 2014 and continuing through fiscal year 2023, will contribute to deficit reduction. Fees collected beyond those identified for deficit reduction are available, consistent with existing law, to offset TSA's aviation security costs. According to the House of Representatives and Senate Committees on the Budget, and notwithstanding amounts dedicated for deficit reduction, collections under the modified fee structure will offset about 43 percent of aviation security costs, compared with the approximately 30 percent currently offset under the existing fee structure. 
Combat uniforms: In our 2013 annual report, we noted that DOD employed a fragmented approach for acquiring combat uniforms and could improve efficiency, better protect servicemembers, and realize cost savings through increased collaboration among the military services. Over the past year, DOD and Congress addressed all three actions that we identified. In September 2013, DOD developed and issued guidance on joint criteria that will help to ensure that future service-specific uniforms will provide equivalent levels of performance and protection. In December 2013, a provision in the National Defense Authorization Act for Fiscal Year 2014 established as policy that the Secretary of Defense shall eliminate the development and fielding of service-specific combat and camouflage utility uniforms in order to adopt and field common uniforms for specific environments to be used by all members of the armed forces. Subject to certain exceptions, the provision also prohibits the military departments from adopting new pattern designs or uniform fabrics unless they will be adopted by all services or the uniform is already in use by another service. We estimate that executive branch and congressional efforts to address these and other actions from fiscal year 2011 through fiscal year 2013 have resulted in over $10 billion in realized cost savings to date, with billions of dollars more in savings projected to accrue over the next 10 years. Although Congress and executive branch agencies have made notable progress toward addressing the actions we have identified, further steps are needed to fully address the remaining actions, as shown in table 3. More specifically, over 60 percent of the actions directed to Congress and executive branch agencies identified in 2011, 2012, and 2013 remain partially addressed or not addressed. 
Sustaining momentum and making significant progress on our suggested actions for reducing, eliminating, or better managing fragmentation, overlap, or duplication, or for achieving other potential financial benefits, cannot occur without demonstrated commitment by executive branch leaders and continued oversight by Congress. A number of the issues that we have identified are complex, and implementing many of the actions will take time and sustained leadership. As our work has shown, committed leadership is needed to overcome the many barriers to working across agency boundaries, such as agencies' concerns about protecting jurisdiction over missions and control over resources, or incompatible procedures, processes, data, and computer systems. Without increased or renewed leadership focus, agencies may miss opportunities to improve the efficiency and effectiveness of their programs and save taxpayers' dollars. As we have previously reported, addressing the issues identified in our annual reports could lead to tens of billions of dollars in savings. Table 4 highlights selected opportunities that could result in cost savings or enhanced revenues. Even with sustained leadership, addressing fragmentation, overlap, and duplication within the federal government is challenging because it may require agencies and Congress to re-examine, within and across various mission areas, the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. As we have previously reported, these challenges are compounded by a lack of good data. In particular, we have found that the lack of a comprehensive list of federal programs and reliable budget information makes it difficult to identify, assess, and address potential fragmentation, overlap, and duplication. Currently, no comprehensive list of federal programs exists, nor is there a common definition for what constitutes a federal program. 
We have also reported instances where agencies could not isolate budgetary information for some programs because the data were aggregated at higher levels. For example, in 2012 we reported that agencies were not able to provide complete and reliable federal funding information on many of the 94 nonfederal sector green building initiatives. According to agency officials, many of the initiatives are part of broader programs, and the agencies do not track green building funds separately from the funds for other activities. Without knowing the scope of programs or the full cost of implementing them, it is difficult for executive branch agencies or Congress to gauge the magnitude of the federal commitment to a particular area of activity or the extent to which associated federal programs are effectively and efficiently achieving shared goals. Moreover, the lack of reliable, detailed budget information makes it difficult to estimate the cost savings that could be achieved should Congress or agencies take certain actions to address identified fragmentation, overlap, and duplication. Absent this information, Congress and agencies cannot make fully informed decisions on how federal resources should be allocated and the potential budget trade-offs. In addition, we have called attention to the need for improved and regular performance information. The regular collection and review of performance information, both within and among federal agencies, could help executive branch agencies and Congress determine whether the return on federal investment is adequate and make informed decisions about future resource allocations. However, as we previously noted, our annual reports on fragmentation, overlap, and duplication highlight several instances in which executive branch agencies do not collect necessary performance data. 
Effective implementation of the framework originally put into place by the Government Performance and Results Act of 1993 (GPRA) and significantly enhanced by the GPRA Modernization Act of 2010 (GPRAMA) could help clarify desired outcomes, address program performance spanning multiple organizations, and facilitate future actions to reduce, eliminate, or better manage fragmentation, overlap, and duplication. In particular, GPRAMA establishes a framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. The crosscutting approach required by the act will provide a much-needed basis for more fully integrating a wide array of federal activities as well as a cohesive perspective on the long-term goals of the federal government that is focused on priority policy areas. It could also be a valuable tool for re-examining existing programs government-wide and for considering proposals for new programs. However, the usefulness of these requirements hinges on the effective implementation of the act's provisions. In our June 2013 review of initial implementation, we reported that the executive branch needed to more fully implement GPRAMA to address pressing governance challenges, such as addressing fragmentation, overlap, and duplication. Moreover, our ongoing work continues to find opportunities to improve implementation of the act. For example, GPRAMA requires the Office of Management and Budget (OMB) to develop an inventory of federal programs. OMB directed 24 large federal agencies to develop and publish inventories of their programs in May 2013. However, our preliminary review of these initial inventories identified concerns about the usefulness of the information being developed and the extent to which it might be able to assist executive branch and congressional efforts to identify and address fragmentation, overlap, and duplication. 
For example, OMB's guidance for developing the inventories provided agencies with flexibility to define their programs by such factors as outcomes, customers, products/services, organizational structure, and budget structure. As a result, agencies took various approaches to defining their programs. Many used their budget structure, while others used different approaches, such as identifying programs by related outcomes or customer focus. The variation in definitions across agencies limits comparability among similar programs. Proposed legislation could help address some of the data limitations we have identified. For example, the proposed Digital Accountability and Transparency Act is intended to improve the accountability and transparency of federal spending data (1) by establishing government-wide financial data standards so that data are comparable across agencies and (2) by holding agencies more accountable for the quality of the information disclosed. Such increased transparency provides opportunities for improving the efficiency and effectiveness of federal spending and improving oversight to prevent and detect fraud, waste, and abuse of federal funds. In conclusion, identifying and addressing instances of fragmentation, overlap, and duplication is challenging. While some progress has been made, more work remains. We plan to conduct further analysis to look for additional or emerging instances of fragmentation, overlap, and duplication and opportunities for cost savings or revenue enhancement. Likewise, we will continue to monitor developments in the areas we have already identified in this series. We stand ready to assist this and other committees in further analyzing the issues we have identified and evaluating potential solutions. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer questions. 
For further information on this testimony or our April 8, 2014, report, please contact Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or williamso@gao.gov, and A. Nicole Clowers, Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or clowersa@gao.gov. Contact points for the individual areas listed in our 2014 annual report can be found at the end of each area at http://www.gao.gov/products/GAO-14-343SP. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As the fiscal pressures facing the government continue, so too does the need for executive branch agencies and Congress to improve the efficiency and effectiveness of government programs and activities. Opportunities to take action exist in areas where federal programs or activities are fragmented, overlapping, or duplicative. To highlight these challenges and to inform government decision makers on actions that could be taken to address them, GAO is statutorily required to identify and report annually to Congress on federal programs, agencies, offices, and initiatives, both within departments and government-wide, that have duplicative goals or activities. GAO has also identified additional opportunities to achieve greater efficiency and effectiveness by means of cost savings or enhanced revenue collection. This statement discusses (1) the new areas identified in GAO's 2014 annual report; (2) the status of actions taken by the administration and Congress to address the 162 areas previously identified in GAO's 2011, 2012, and 2013 annual reports; and (3) opportunities to address the issues GAO identified. To identify what actions exist to address these issues and take advantage of opportunities for cost savings and enhanced revenues, GAO reviewed and updated prior work and recommendations for consideration. GAO's 2014 annual report identifies 64 new actions that executive branch agencies and Congress could take to improve the efficiency and effectiveness of 26 areas of government. GAO identifies 11 new areas in which there is evidence of fragmentation, overlap, or duplication. For example, under current law, individuals are allowed to receive concurrent payments from the Disability Insurance and Unemployment Insurance programs. Eliminating the overlap in these payments could save the government about $1.2 billion over the next 10 years. GAO also identifies 15 new areas where opportunities exist either to reduce the cost of government operations or enhance revenue collections. 
For example, Congress could rescind all or part of the remaining $4.2 billion in credit subsidies for the Advanced Technology Vehicles Manufacturing Loan program unless the Department of Energy demonstrates sufficient demand for this funding. The executive branch and Congress have made progress in addressing the approximately 380 actions across 162 areas that GAO identified in its past annual reports. As of March 6, 2014, the date GAO completed its progress update audit work, nearly 20 percent of these areas were addressed, over 60 percent were partially addressed, and about 15 percent were not addressed, as shown in the figure below. Executive branch and congressional efforts to address these and other actions over the past 3 years have resulted in over $10 billion in cost savings with billions of dollars more in cost savings anticipated in future years. Better data and a greater focus on outcomes are essential to improving the efficiency and effectiveness of federal efforts. Currently, there is not a comprehensive list of all federal programs and agencies often lack reliable budgetary and performance information about their own programs. Without knowing the scope, cost, or performance of programs, it is difficult for executive branch agencies or Congress to gauge the magnitude of the federal commitment to a particular area of activity or the extent to which associated federal programs are effectively and efficiently achieving shared goals.
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather 3 or more days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position relative to the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Currently, there are two operational POES satellites and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, midmorning, and early afternoon polar orbits. Together, they ensure that, for any region of the earth, the data provided to users are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration.
Besides the four operational satellites, six older satellites that still collect some data remain in orbit and can provide limited backup to the operational satellites should they degrade or fail. In the future, both NOAA and the Air Force plan to continue to launch additional POES and DMSP satellites every few years, with final launches scheduled for 2009 and 2012, respectively. Each of the polar satellites carries a suite of sensors designed to detect environmental data that are either reflected or emitted from the earth, the atmosphere, and space. The satellites broadcast a subset of these data in real time to properly equipped field terminals that are within a direct line of sight; these field terminals are located at universities, on battlefields, or on ships. Additionally, the polar satellites store the observed environmental data and then transmit them to NOAA and Air Force ground stations when the satellites pass overhead. The ground stations then relay the data via communications satellites to the appropriate meteorological centers for processing. Under a shared processing agreement among four satellite data processing centers—NOAA’s National Environmental Satellite Data and Information Service (NESDIS), the Air Force Weather Agency, the Navy’s Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office—different centers are responsible for producing and distributing, via a shared network, different environmental data sets, specialized weather and oceanographic products, and weather prediction model outputs. Each of the four processing centers is also responsible for distributing the data to its respective users. For the DOD centers, the users include regional meteorology and oceanography centers, as well as meteorology and oceanography staff on military bases, the Naval Fleet, and mobile field sites.
NESDIS forwards the data to NOAA’s National Weather Service for distribution and use by government and commercial forecasters. The processing centers also use the Internet to distribute data to the general public. NESDIS is responsible for the long-term archiving of data and derived products from POES and DMSP. Figure 2 depicts a generic data relay pattern from the polar-orbiting satellites to the data processing centers and field terminals. Polar satellites gather a broad range of data that are transformed into a variety of products. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth’s atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, the processing centers format the data so that they are time-sequenced and include earth location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather and climate products called environmental data records (EDR). EDRs include a wide range of atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; land surface products showing snow cover, vegetation, and land use; ocean products depicting sea surface temperatures, sea ice, and wave height; and characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 3 is a simplified depiction of the various stages of satellite data processing, and figures 4 and 5 depict examples of EDR weather products. 
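The processing chain described above (raw data, then raw data records, then channel-specific sensor and temperature data records, then derived EDRs) can be sketched as a few simple record types. This is purely illustrative: the class and field names below are ours, not drawn from any NPOESS ground-system specification.

```python
from dataclasses import dataclass, field

# Illustrative record types mirroring the processing stages described in
# the text; all names and fields are hypothetical.

@dataclass
class RawDataRecord:
    """Raw data after formatting: time-sequenced, with earth location
    and calibration information attached."""
    payload: bytes
    timestamp: float
    earth_location: tuple  # (latitude, longitude)
    calibration: dict

@dataclass
class SensorDataRecord:
    """Channel-specific data set derived from raw data records."""
    channel: str
    values: list

@dataclass
class EnvironmentalDataRecord:
    """Derived weather or climate product (EDR), e.g. cloud coverage
    or sea surface temperature."""
    product: str
    inputs: list = field(default_factory=list)  # records used to derive it

def format_raw(payload: bytes, t: float, loc: tuple, cal: dict) -> RawDataRecord:
    """The formatting step: make raw data usable by attaching time,
    earth location, and calibration information."""
    return RawDataRecord(payload, t, loc, cal)
```

A processing center would then derive channel-specific `SensorDataRecord`s and combine them into `EnvironmentalDataRecord`s; the real integrated data processing segment is, of course, far more involved than this sketch.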
With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2026. To manage this program, DOD, NOAA, and NASA formed the tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. Figure 6 depicts the organizations that make up the NPOESS program office and lists their responsibilities. The NPOESS program office is overseen by an Executive Committee, which is made up of the Administrators of NOAA and NASA and the Undersecretary of the Air Force. NPOESS is a major system acquisition that was originally estimated to cost about $6.5 billion over the 24-year life of the program from its inception in 1995 through 2018. The program is to provide satellite development, satellite launch and operation, and ground-based satellite data processing. 
These deliverables are grouped into four main categories: (1) the space segment, which includes the satellites and sensors; (2) the integrated data processing segment, which is the system for transforming raw data into EDRs and is to be located at the four processing centers; (3) the command, control, and communications segment, which includes the equipment and services needed to support satellite operations; and (4) the launch segment, which includes the launch vehicle services. When the NPOESS engineering, manufacturing, and development contract was awarded in August 2002, the cost estimate was adjusted to $7 billion. Acquisition plans called for the procurement and launch of six satellites over the life of the program, as well as the integration of 13 instruments—consisting of 10 environmental sensors and 3 subsystems. Together, the sensors were to receive and transmit data on atmospheric, cloud cover, environmental, climatic, oceanographic, and solar-geophysical observations. The subsystems were to support nonenvironmental search and rescue efforts, sensor survivability, and environmental data collection activities. The program office considered 4 of the sensors to be critical because they provide data for key weather products; these sensors are in bold in table 1, which describes each of the expected NPOESS instruments. In addition, NPP was planned as a demonstration satellite to be launched several years before the first NPOESS satellite in order to reduce the risk associated with launching new sensor technologies and to ensure continuity of climate data with NASA’s Earth Observing System satellites. NPP is to host three of the four critical NPOESS sensors (VIIRS, CrIS, and ATMS), as well as one other noncritical sensor (OMPS). NPP is to provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems.
When the NPOESS development contract was awarded, the schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites. In general, satellite experts anticipate that roughly 1 out of every 10 satellites will fail either during launch or during early operations after launch. Early program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. Over the last few years, NPOESS has experienced continued cost increases and schedule delays, requiring difficult decisions to be made about the program’s direction and capabilities. In 2003, we reported that changes in the NPOESS funding stream caused a delay in the program’s schedule. Specifically, in late 2002, DOD shifted the expected launch date for its final DMSP satellite from 2009 to 2010. As a result, the department reduced funding for NPOESS by about $65 million between fiscal years 2004 and 2007. According to program officials, because NOAA was required to provide the same level of funding that DOD provides, this change triggered a corresponding reduction in funding by NOAA for those years. As a result of the reduced funding, program officials were forced to make difficult decisions about what to focus on first. The program office decided to keep NPP as close to its original schedule as possible because of its importance to the eventual NPOESS development and to shift some of the program’s deliverables to later years. This shift affected the NPOESS deployment schedule. 
To plan for this shift, the program office developed a new program cost and schedule baseline. After this new baseline was completed in 2004, we reported that the program office increased the NPOESS cost estimate from about $7 billion to $8.1 billion; delayed key milestones, including the planned launch of the first NPOESS satellite—which was delayed by 7 months; and extended the life of the program from 2018 to 2020. The cost increases reflected changes to the NPOESS contract, as well as increased program management funds. According to the program office, contract changes included extension of the development schedule, increased sensor costs, and additional funds needed for mitigating risks. Increased program management funds were added for noncontract costs and management reserves. At that time, we also noted that other factors could further affect the revised cost and schedule estimates. Specifically, the contractor was not meeting expected cost and schedule targets on the new baseline because of technical issues in the development of key sensors, including the critical VIIRS sensor. Based on its performance through May 2004, we estimated that the contractor would most likely overrun its contract at completion in September 2011 by $500 million—thereby increasing the projected life cycle cost to $8.6 billion. In addition, we reported that risks associated with the development of the critical sensors, integrated data processing system, and algorithms, among other things, could contribute to further cost increases and schedule slips—and we noted that continued oversight was critical. The program office’s baseline cost estimate was subsequently adjusted to $8.4 billion. In mid-November 2005, we reported that NPOESS continued to experience problems in the development of a key sensor, resulting in schedule delays and anticipated cost increases. 
At that time, we projected that the program’s cost estimate had grown to about $10 billion based on contractor cost and schedule data. We reported that the program’s issues were due, in part, to problems at multiple levels of management—including subcontractor, contractor, program office, and executive leadership. Recognizing that the budget for the program was no longer executable, the NPOESS Executive Committee planned to make a decision in December 2005 on the future direction of the program—what would be delivered, at what cost, and by when. This entailed deciding among options involving increased costs, delayed schedules, and reduced functionality. We noted that continued oversight, strong leadership, and timely decision making were more critical than ever, and we urged the committee to make a decision quickly so that the program could proceed. However, we subsequently reported that, in late November 2005, NPOESS cost growth exceeded a legislatively mandated threshold that requires DOD to certify the program to Congress. This placed any decision about the future direction of the program on hold until the certification took place in June 2006. In the meantime, the program office implemented an interim program plan for fiscal year 2006 to continue work on key sensors and other program elements using fiscal year 2006 funding. The Nunn-McCurdy law requires DOD to take specific actions when a major defense acquisition program exceeds certain cost thresholds. In November 2005, key provisions of the law required the Secretary of Defense to notify Congress when a major defense acquisition was expected to overrun its project baseline by 15 percent or more and to certify the program to Congress when it was expected to overrun its baseline by 25 percent or more. At that time, NPOESS exceeded the 25 percent threshold, and DOD was required to certify the program.
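The statutory thresholds described above reduce to a simple percentage comparison against the approved program baseline. A minimal sketch follows; the function name, return labels, and the dollar figures in the example are ours for illustration, not statutory terms or actual NPOESS baseline values.

```python
def nunn_mccurdy_status(baseline_cost: float, current_estimate: float) -> str:
    """Classify cost growth against the Nunn-McCurdy thresholds in
    effect in November 2005: notification to Congress at a 15 percent
    overrun of the baseline, certification at 25 percent or more."""
    overrun = (current_estimate - baseline_cost) / baseline_cost
    if overrun >= 0.25:
        return "certify"            # Secretary of Defense must certify the program
    if overrun >= 0.15:
        return "notify"             # Secretary of Defense must notify Congress
    return "within thresholds"

# Purely illustrative figures, in billions of dollars:
nunn_mccurdy_status(8.0, 10.4)  # a 30 percent overrun -> "certify"
```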
Certifying a program entailed providing a determination that (1) the program is essential to national security, (2) there are no alternatives to the program that will provide equal or greater military capability at less cost, (3) the new estimates of the program’s cost are reasonable, and (4) the management structure for the program is adequate to manage and control costs. DOD established tri-agency teams—made up of DOD, NOAA, and NASA experts—to work on each of the four elements of the certification process. In June 2006, DOD (with the agreement of both of its partner agencies) certified a restructured NPOESS program, estimated to cost $12.5 billion through 2026. This decision approved a cost increase of $4 billion over the prior approved baseline cost and delayed the launch of NPP and the first two satellites by roughly 3 to 5 years. The new program also entailed establishing a stronger program management structure, reducing the number of satellites to be produced and launched from 6 to 4, and reducing the number of instruments on the satellites from 13 to 9—consisting of 7 environmental sensors and 2 subsystems. It also entailed using NPOESS satellites in the early morning and afternoon orbits and relying on European satellites for midmorning orbit data. Table 2 summarizes the major program changes made under the Nunn-McCurdy certification decision. The Nunn-McCurdy certification decision established new milestones for the delivery of key program elements, including launching NPP by January 2010, launching the first NPOESS satellite (called C1) by January 2013, and launching the second NPOESS satellite (called C2) by January 2016. These revised milestones deviated from prior plans to have the first NPOESS satellite available to back up the final POES satellite should anything go wrong during that launch.
Delaying the launch of the first NPOESS satellite means that if the final POES satellite fails on launch, satellite data users would need to rely on the existing constellation of environmental satellites until NPP data become available—almost 2 years later. Although NPP was not intended to be an operational asset, NASA agreed to move NPP to a different orbit so that its data would be available in the event of a premature failure of the final POES satellite. However, NPP will not provide all of the operational capability planned for the NPOESS spacecraft. If the health of the existing constellation of satellites diminishes—or if NPP data are not available, timely, and reliable—then there could be a gap in environmental satellite data. Table 3 summarizes changes in key program milestones over time. In order to reduce program complexity, the Nunn-McCurdy certification decision decreased the number of NPOESS sensors from 13 to 9 and reduced the functionality of 4 sensors. Specifically, of the 13 original sensors, 5 sensors remain unchanged, 3 were replaced with less capable sensors, 1 was modified to provide less functionality, and 4 were cancelled. Table 4 shows the changes to NPOESS sensors, including the 4 identified in bold as critical sensors. The changes in NPOESS sensors affected the number and quality of the resulting weather and environmental products, called EDRs. In selecting sensors for the restructured program, the Nunn-McCurdy process placed the highest priority on continuing current operational weather capabilities and a lower priority on obtaining selected environmental and climate measuring capabilities. As a result, the revised NPOESS system has significantly less capability for providing global climate measures than was originally planned. Specifically, the number of EDRs was decreased from 55 to 39, of which 6 are of a reduced quality.
The 39 EDRs that remain include cloud base height, land surface temperature, precipitation type and rate, and sea surface winds. The 16 EDRs that were removed include cloud particle size and distribution, sea surface height, net solar radiation at the top of the atmosphere, and products to depict the electric fields in the space environment. The 6 EDRs that are of a reduced quality include ozone profile, soil moisture, and multiple products depicting energy in the space environment. Given the changes in planned sensors, program officials established a planned configuration for NPP and the four satellites of the NPOESS program, called C1, C2, C3, and C4 (see table 5). Program officials acknowledged that this configuration could change if other parties decided to develop the sensors that were cancelled. However, they noted that the planned configuration of the first satellite cannot change without increasing the risk that the launch will be delayed. To be effective, project managers need current information on a contractor’s progress in meeting contract deliverables. One method that can help project managers track this progress is earned value management. This method, used by DOD for several decades, compares the value of work accomplished during a given period with that of the work expected in that period. Differences from expectations are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a –$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed with the value of work that was expected to be completed. 
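The variance arithmetic just described is simple subtraction: earned value minus actual cost for cost variance, and earned value minus planned value for schedule variance. A minimal sketch, using the cost figures from the text (the function names are ours):

```python
def cost_variance(earned_value: float, actual_cost: float) -> float:
    """Earned value of completed work minus its actual cost; a negative
    result means the work cost more than budgeted."""
    return earned_value - actual_cost

def schedule_variance(earned_value: float, planned_value: float) -> float:
    """Earned value of completed work minus the value of work scheduled
    to be complete; measured in dollars, a negative result means the
    work is behind schedule."""
    return earned_value - planned_value

# The text's cost example, in millions of dollars: $5 million of work
# completed at an actual cost of $6.7 million yields a -$1.7 million
# cost variance.
cost_variance(5.0, 6.7)
```

Schedule variance is computed the same way against the budgeted (planned) value of work, as the document's second example shows.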
For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a –$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. Since the June 2006 decision to revise the scope, cost, and schedule of the NPOESS program, the program office has made progress in restructuring the satellite acquisition; however, important tasks leading up to revising and finalizing contract changes remain to be completed. Restructuring a major acquisition program like NPOESS is a process that involves identifying time-critical and high-priority work and keeping this work moving forward, while reassessing development priorities, interdependencies, deliverables, risks, and costs. It also involves revising important acquisition documents including the memorandum of agreement on the roles and responsibilities of the three agencies, the acquisition strategy, the system engineering plan, the test and evaluation master plan, the integrated master schedule defining what needs to happen by when, and the acquisition program baseline. The Nunn-McCurdy certification decision required the Secretaries of Defense and Commerce and the Administrator of NASA to sign a revised memorandum of agreement by August 6, 2006. It also required that the program office, Program Executive Officer, and the Executive Committee revise and approve key acquisition documents including the acquisition strategy and system engineering plan by September 1, 2006, in order to proceed with the restructuring.
Once these are completed, the program office can proceed to negotiate with its prime contractor on a new program baseline defining what will be delivered, by when, and at what cost. The NPOESS program office has made progress in restructuring the acquisition. Specifically, the program office has established interim program plans guiding the contractor’s work activities in 2006 and 2007 and has made progress in implementing these plans. For example, the program office reported that it had completed 156 of 166 key milestones during fiscal year 2006—including completing ambient and thermal vacuum testing of the VIIRS engineering unit. Of the 10 remaining milestones, which resulted from unanticipated problems in the development of VIIRS and CrIS, 5 have since been completed and 5 are still pending. The program office plans to complete 222 milestones in fiscal year 2007—including completing performance tests on the OMPS (nadir) sensor—and notes that it is slightly ahead of plan, having completed 62 milestones through January 20, 2007, 2 more than had been planned. Figures 7 and 8 depict the program office’s progress against key milestones in fiscal year 2006 and to date in fiscal year 2007. The program office has also made progress in revising key acquisition documents. It revised the system engineering plan, the test and evaluation master plan, and the acquisition strategy plan, and obtained approval of these documents from the Program Executive Officer. The program office and contractor also developed an integrated master schedule for the remainder of the program—beyond fiscal year 2007. This integrated master schedule details the steps leading up to launching NPP by September 2009, launching the first NPOESS satellite in January 2013, and launching the second NPOESS satellite in January 2016.
Near-term steps include completing and testing the VIIRS, CrIS, and OMPS sensors; integrating these sensors with the NPP spacecraft and completing integration testing; completing the data processing system and integrating it with the command, control, and communications segment; and performing advanced acceptance testing of the overall system of systems for NPP. However, key steps remain for the acquisition restructuring to be completed. These steps include obtaining the approval of the Secretaries of Commerce and Defense and the Administrator of NASA on the memorandum of agreement among the three agencies, and obtaining the approval of the NPOESS Executive Committee on key acquisition documents, including the system engineering plan, the test and evaluation master plan, and the acquisition strategy. These approvals are currently over 6 months past due. Agency officials noted that the September 1, 2006, due date for the key acquisition documents was not realistic given the complexity of coordinating documents among three different agencies, but did not provide a new estimate for when these documents would be approved. Finalizing these documents is critical to ensuring interagency agreements and will allow the program office to move forward in completing other activities related to restructuring the program. These activities include conducting an integrated baseline review with the contractor to reach agreement on the schedule and work activities, and finalizing changes to the NPOESS development and production contract—thereby allowing the program office to lock down a new acquisition baseline cost and schedule. The program office expects to conduct an integrated baseline review by May 2007 and to finalize the contract changes by July 2007. 
Until key acquisition documents are finalized and approved, the program faces increased risk that it will not be able to complete important restructuring activities in time to move forward in fiscal year 2008 with a new program baseline in place. This places the NPOESS program at risk of continued delays and future cost increases. The NPOESS program has made progress in establishing an effective management structure, but—almost a year after this structure was endorsed during the Nunn-McCurdy certification process—the Integrated Program Office still faces staffing problems. Over the past few years, we and others have raised concerns about management problems at all levels of the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. Two independent review teams also noted a shortage of skilled program staff, including budget analysts and system engineers. Since that time, the NPOESS program has made progress in establishing an effective management structure—including establishing a new organizational framework with increased oversight by program executives, instituting more frequent subcontractor, contractor, and program reviews, and effectively managing risks and performance. However, DOD’s plans for reassigning the Program Executive Officer in Summer 2007 increase the program’s risks. Additionally, the program lacks a staffing process that clearly identifies staffing needs, gaps, and plans for filling those gaps. As a result, the program office has experienced delays in getting core management activities under way and lacks the staff it needs to execute day-to-day management activities. The NPOESS program has made progress in establishing an effective management structure and increasing the frequency and intensity of its oversight activities. 
Over the past few years, we and others have raised concerns about management problems at all levels of management on the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. In response to recommendations made by two different independent review teams, the program office began exploring options in late 2005 and early 2006 for revising its management structure. In November 2005, the Executive Committee established and filled a Program Executive Officer position, senior to the NPOESS Program Director, to streamline decision making and to provide oversight to the program. This Program Executive Officer reports directly to the Executive Committee. Subsequently, the Program Executive Officer and the Program Director proposed a revised organizational framework that realigned division managers within the Integrated Program Office responsible for overseeing key elements of the acquisition and increased staffing in key areas. In June 2006, the Nunn-McCurdy certification decision approved this new management structure and the Integrated Program Office implemented it. Figure 9 provides an overview of the relationships among the Integrated Program Office, the Program Executive Office, and the Executive Committee, as well as key divisions within the program office. Operating under this new management structure, the program office implemented more rigorous and frequent subcontractor, contractor, and program reviews, improved visibility into risk management and mitigation activities, and institutionalized the use of earned value management techniques to monitor contractor performance. Specifically, program officials and the prime contractor now review the subcontractors’ cost and schedule performance on a weekly basis. The information from these meetings feeds into monthly government meetings with the prime contractor to review progress against milestones, issues, and risks. 
Further, the Program Director conducts monthly reviews with each technical division lead to review the divisions’ achievements, risks, and plans. Program officials note that these frequent reviews allow information on risks to be quickly escalated from subcontractors to contractors, to the program component level, and to the Program Director—and they allow program officials to better manage efforts to reduce risks. The program office also reported that all division leads were trained in earned value management techniques and were effectively using these techniques both to monitor subcontractor and contractor performance on a weekly basis and to identify potential problems as soon as possible. In addition to these program office activities, the Program Executive Officer implemented monthly program reviews and increased the frequency of contacts with the Executive Committee. Specifically, the Program Executive Officer holds monthly program management reviews where the Program Director and program division leads (for example, those in charge of systems engineering or ground systems) provide briefings on the program’s earned value, progress, risks, and concerns. We observed that these briefings allow the Program Executive Officer to have direct insight into the challenges and workings of the Integrated Program Office and allow risks to be appropriately escalated and addressed. These meetings also provide an open forum for managers to raise concerns and ask questions about operational challenges. For example, when NASA officials expressed concerns that vibration levels used during testing were higher than necessary and were causing damage to key sensor components, the Program Director and Program Executive Officer immediately established a forum to discuss and mitigate this issue. 
The Program Executive Officer briefs the Executive Committee in monthly letters, apprising committee members of the program’s status, progress, risks, and earned value, and the Executive Committee now meets on a quarterly basis; in the recent past, we reported that the Executive Committee had met only five times in 2 years. While the NPOESS program has made progress in establishing an effective management structure, this progress is currently at risk. We recently reported that DOD space acquisitions are at increased risk due in part to frequent turnover in leadership positions, and we suggested that addressing this will require DOD to consider matching officials’ tenure with the development or delivery of a product. In March 2007, NPOESS program officials stated that DOD is planning to reassign the recently appointed Program Executive Officer in summer 2007 as part of this executive’s natural career progression. As of March 2007, the Program Executive Officer had held this position for 16 months. Given that the program is still being restructured and that it faces significant challenges in meeting critical deadlines to ensure satellite data continuity, such a move adds unnecessary risk to an already risky program. The NPOESS program office has filled key vacancies in recent months but lacks a staffing process that identifies programwide staffing requirements and plans for filling needed positions. Sound human capital management calls for establishing a process or plan for determining staffing requirements, identifying any gaps in staffing, and planning to fill critical staffing gaps. Program office staffing is especially important for NPOESS, given the acknowledgment by multiple independent review teams that staffing shortfalls contributed to past problems. 
Specifically, these review teams noted shortages in the number of system engineers needed to provide adequate oversight of subcontractor and contractor engineering activities and in the number of budget and cost analysts needed to assess contractor cost and earned value reports. To rectify this situation, the June 2006 certification decision directed the Program Director to take immediate actions to fill vacant positions at the program office with the approval of the Program Executive Officer. Since the June 2006 decision to revise the NPOESS management structure, the program office has filled multiple critical positions, including a budget officer, a chief system engineer, an algorithm division chief, and a contracts director. In addition, on an ad hoc basis, individual division managers have assessed their needs and initiated plans to hire individuals for key positions. However, almost a year after the certification, the program office still lacks a programwide process for identifying and filling all needed positions. As a result, division managers often wait months for critical positions to be filled. For example, in February 2006, the NPOESS program estimated that it needed to hire up to 10 new budget analysts. As of September 2006, none of these positions had been filled. Program officials now estimate that only 7 budget analyst positions needed to be filled; 2 of these positions have been filled, and 5 remain vacant. Additionally, even though the certification decision directed immediate action to fill critical vacancies, the program still has vacancies in 5 systems engineering positions and 10 technical manager positions. The majority of the vacancies—4 of the 5 budget positions, 4 of the 5 systems engineering positions, and 8 of the 10 technical manager positions—are to be provided by NOAA. 
NOAA officials noted that each of these positions is in some stage of being filled—that is, recruitment packages are being developed or reviewed, vacancies are being advertised, or candidates are being interviewed, selected, and approved. The program office attributes its staffing delays to not having had the right personnel in place to facilitate this process; it did not begin to develop a staffing process until November 2006. Program officials noted that the tri-agency nature of the program adds unusual layers of complexity to the hiring and administrative functions because each agency has its own hiring and performance management rules. In November 2006, the program office brought in an administrative officer who took the lead in pulling together the division managers’ individual assessments of needed staff—currently estimated to be 25 vacant positions—and has been working with the division managers to refine this list. This new administrative officer plans to train division managers in how to assess their needs and hire needed staff, and to develop a process by which evolving needs are identified and positions are filled. However, no date has yet been set for establishing this basic programwide staffing process. Because of the lack of a programwide staffing process, there has been an extended delay in determining what staff are needed and in bringing those staff on board—which in turn has delayed core management activities such as establishing the program office’s cost estimate and bringing in needed contracting expertise. Additionally, until a programwide staffing process is in place, the program office risks not having the staff it needs to execute day-to-day management activities. In June 2006, DOD certified a restructured NPOESS program that was estimated to cost $11.5 billion for the acquisition portion of the program and scheduled to launch the first satellite in 2013. 
The Office of the Secretary of Defense’s Cost Analysis Improvement Group (cost analysis group)—the independent cost estimators charged with developing the estimate for the acquisition portion of the program—used an acceptable methodology to develop this estimate. When combined with an estimated $1 billion for operations and support after launch, this brings the program life cycle cost to $12.5 billion. Recent events, however, could further increase program costs or delay schedules. Specifically, the program continues to experience technical problems on key sensors, and costs and schedules will be adjusted during negotiations on contract changes. The NPOESS program office is developing its own cost estimate to refine the one developed in June 2006 that it will use to negotiate contract changes. A new baseline cost will be established once the contract is finalized. The cost and schedule estimate for the restructured NPOESS program was developed by DOD’s cost analysis group using an acceptable methodology. Cost-estimating organizations throughout the federal government and industry use certain key practices—related to planning, conducting, and reporting the estimate—to ensure a sound estimate. Table 6 lists the elements of a sound cost estimating methodology. In addition, to ensure the validity of the data assumptions that go into the estimate, leading organizations use actual historical costs and seek an independent validation of critical cost drivers. DOD’s cost analysis group used an acceptable methodology in developing the NPOESS cost estimate in that they planned, conducted, and reported the estimate consistent with leading practices. The cost analysis group’s cost estimating approach was largely driven by the program’s principal “ground rule” to maintain the continuity of weather data without a gap. 
Specifically, the cost analysis group assessed two risks: (1) the uncertainty of the health of the current polar-satellite constellation and (2) the uncertainty of when the new satellite system could be delivered (including the time needed to evaluate new satellites once in orbit). The resulting analysis showed that the restructured NPOESS system could be delivered and the first satellite launched by 2013 with a high level of confidence in maintaining satellite data continuity. To determine specific costs, the group used the existing work breakdown structure employed by the program office as the basis for performing its work. This work breakdown structure consists of seven major elements: ground systems; spacecraft; sensors; assembly, integration, and test; system engineering/contractor program management; government program management; and launch. The cost analysis group also took steps to ensure the validity of the data that went into the estimate. For each element, the cost analysis group visited all major contractor sites to collect program data, including the schedule (the original, rebaselined, and current schedules, and the risks affecting the current schedule); the current staffing profile by month; the history of staffing used; the qualifications of people charging the program; the program’s technical approaches; and the contractor’s program legacy (a justification that the contractor has worked on similar projects in the past and should be able to adapt that knowledge to the current work). The cost analysis group also compared these data with contractor labor rates from the Defense Contract Management Agency and obtained NASA’s validation of the costs associated with the most significant cost driver, the VIIRS sensor. Since schedule was the primary uncertainty factor in the cost analysis, it was also the driver of overall costs. 
Specifically, the cost analysis group took its risk-adjusted schedule durations for the major cost elements and adjusted the contractor-submitted manning profiles accordingly. They then used NPOESS historical data on labor rates and materials to calculate the cost of these elements. Consistent with DOD practice, the cost analysis group established its cost estimate at a 50 percent confidence level. However, cost analysts could not provide an upper limit for potential cost growth, explaining that the program contains “failsafe” measures to use alternative technologies (such as using legacy systems) if schedules are delayed and costs increase. As a result, cost analysts reported that they have a high level of confidence that acquisition costs will not exceed $11.5 billion—but a lower level of confidence that the configuration of sensors will remain unchanged. While the June 2006 cost estimate for the acquisition portion of the program was reasonable at the time it was made, several recent events could cause program life cycle costs to grow or schedules to be delayed. Specifically, the program continues to experience technical problems on key sensors. The CrIS sensor being developed for the NPP satellite suffered a major structural failure in October 2006. A failure review board is currently working to resolve the root causes of the failure. While program officials note that they should be able to cover costs related to investigating the problem, the full cost and schedule to fix the sensor is not yet known. Also, VIIRS development, which has been the program’s primary cost driver, is not yet complete and continues to be a high-risk development. This too, could lead to higher final program costs or delayed schedules. Program costs are also likely to be adjusted during upcoming negotiations on contract changes. The NPOESS program office is developing its own cost estimate to refine the one developed in June 2006. 
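A confidence level of this kind is typically read off a simulated distribution of program outcomes: the 50 percent confidence estimate is the cost that half of the simulated outcomes fall at or below. The following is a minimal sketch of the general technique only; the cost elements, distributions, and parameter values are invented for illustration and are not the cost analysis group’s actual model:

```python
import random

def simulate_program_cost(n_trials=100_000, seed=1):
    """Monte Carlo sketch: total cost is the sum of element costs, each
    scaled by a risk-adjusted schedule factor. All values are illustrative."""
    random.seed(seed)
    # (most-likely cost in $billions, schedule-risk spread) per element -- invented
    elements = [(3.0, 0.20), (2.5, 0.15), (4.0, 0.30), (1.5, 0.10)]
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for base_cost, spread in elements:
            # Triangular distribution: schedule slips stretch costs asymmetrically
            # (upside risk is larger than downside opportunity).
            factor = random.triangular(1.0 - spread / 2, 1.0 + spread, 1.0)
            total += base_cost * factor
        totals.append(total)
    return sorted(totals)

def cost_at_confidence(sorted_totals, confidence):
    """Return the cost that the given fraction of simulated outcomes fall at or below."""
    index = min(int(confidence * len(sorted_totals)), len(sorted_totals) - 1)
    return sorted_totals[index]

costs = simulate_program_cost()
p50 = cost_at_confidence(costs, 0.50)  # the 50 percent confidence estimate
p80 = cost_at_confidence(costs, 0.80)  # a more conservative estimate
```

Reading a higher percentile off the same distribution (for example, the 80 percent level) yields a more conservative estimate, which is one reason a 50 percent confidence estimate leaves roughly even odds of cost growth.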
Program officials plan to use this revised cost estimate to negotiate contract changes. A new baseline cost will be established once the contract is finalized—an event that the Program Director expects to occur by July 2007. Major segments of the NPOESS program—the space segment, the ground systems segment, and the launch segment—are under development; however, significant problems have occurred and risks remain. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall cost and schedule. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and getting the ground-based data processing systems developed, tested, and deployed, it will be important for the NPOESS Integrated Program Office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. The space segment includes the sensors and the spacecraft. Four sensors are of critical importance—VIIRS, CrIS, OMPS, and ATMS—because they are to be launched on the NPP satellite. Initiating work on another sensor, the Microwave imager/sounder, is also important because this new sensor—replacing the cancelled CMIS sensor—will need to be developed in time for the second NPOESS satellite launch. Over the past year, the program made progress on each of the sensors and the spacecraft. However, two sensors, VIIRS and CrIS, have experienced major problems. The status of each of the components of the space segment is described in table 7. Earned value management tools are used to compare the value of work accomplished with the work expected during a given time period, and any differences are measured in cost and schedule variances. The NPOESS space segment experienced negative cost and schedule variances between January 2006 and January 2007 (see fig. 10). 
From January 2006 to January 2007, the contractor exceeded cost targets for the space segment by $17 million—which is 4 percent of the space segment budget for that time period. Similarly, the contractor was unable to complete $14.6 million worth of work in the space segment. The main factors behind the cost and schedule variances were underestimation of the scope of work, the diversion of resources from lower priority tasks to higher priority items, and unforeseen design issues on key sensors. For example, VIIRS continued to experience negative cost variance trends due to unplanned efforts, which included refurbishing and recertifying the VIIRS calibration chamber, completing the testing of the engineering design unit, and resolving a problem with the testing equipment needed to adjust VIIRS’ temperature during a key test. Unplanned efforts for CrIS that contributed to the negative cost and schedule variances included additional time required for testing and material management. The schedule variances for VIIRS and CrIS were mainly due to resources being pulled from other areas to support higher priority tasks, extended testing and testing delays, management changes, and improper material handling. Further, there is a high likelihood that CrIS will continue to experience cost and schedule variances against the fiscal year 2007 interim program plan until the issues that caused its structural failure are addressed. Program officials regularly track risks associated with various NPOESS components and work to mitigate them. Having identified both VIIRS and CrIS as high risk, OMPS as a moderate risk, and the other components as low risk, the program office is working closely with the contractors and subcontractors to resolve sensor problems. Program officials have identified work-arounds that will allow them to move forward in testing the VIIRS engineering unit and have approved the flight unit to proceed to a technical readiness review milestone in May 2007. 
Regarding CrIS, as of March 2007, a failure review board had identified the root causes of its structural failure, identified plans for resolving them, and initiated inspections of sensor modules and subsystems for damage. An agency official reported that there is sufficient funding in the fiscal year 2007 program office’s and contractor’s management reserve funds to allow for troubleshooting both VIIRS and CrIS problems. However, until the CrIS failure review board fully determines the amount of rework that is necessary to fix the problems, it is unknown whether additional funds will be needed or whether the time frame for CrIS’ delivery will be delayed. According to agency officials, CrIS is not on the program schedule’s critical path, and there is sufficient schedule margin to absorb the time it will take to conduct a thorough failure review process. Managing the risks associated with the development of VIIRS and CrIS is of particular importance because these sensors are to be demonstrated on the NPP satellite, currently scheduled for launch in September 2009. Additionally, any delay in the NPP launch date could affect the overall NPOESS program because the success of the program depends on the lessons learned in data processing and system integration from the NPP satellite. Development of the ground segment—which includes the interface data processing system, the ground stations that are to receive satellite data, and the ground-based command, control, and communications system—is under way and on track. However, important work pertaining to developing the algorithms that translate satellite data into weather products within the integrated data processing segment remains to be completed. Table 8 describes each of the components of the ground segment and identifies the status of each. Additionally, appendix II provides an overview of satellite data processing algorithms. 
Using contractor-provided data, our analysis indicates that cost and schedule performance on key elements of the NPOESS ground segment was generally on track or positive against the fiscal year 2006 and 2007 interim program plans. For the IDPS component, the contractor completed slightly less work than planned and finished slightly under budget, resulting in cost and schedule variances of less than 1 percent (see fig. 11). For the command, control, and communications component, the contractor was able to outperform its planned targets by finishing under budget by $3 million (6.2 percent of the budget for this time period) and by completing $31,000 (less than 1 percent) worth of work beyond what was planned (see fig. 12). The NPOESS program office plans to continue to address risks facing IDPS development. Specifically, the IDPS team is working to reduce data processing delays by seeking to limit the number of data calls, improve the efficiency of the data management system, increase the efficiency of the algorithms, and increase the number of processors. The program office also developed a resource center consisting of a logical technical library, a data archive, and a set of analytical tools to coordinate, communicate, and facilitate the work of algorithm subject matter experts on algorithm development and calibration/validation preparations. Managing the risks associated with the development of the IDPS system is of particular importance because this system will be needed to process NPP data. Different agencies are responsible for launching NPP and NPOESS. NASA is responsible for the NPP launch and began procuring the launch vehicle for NPP in August 2006. Program officials expect to have it delivered by July 2009, less than 2 months before the scheduled NPP launch in September 2009. The NPOESS Integrated Program Office is responsible for launching the NPOESS satellites. 
According to program officials, the Air Force is to procure launch services for the program through DOD’s Evolved Expendable Launch Vehicle contract. These services are to be procured by January 2011, 2 years before the first scheduled launch. NPOESS restructuring is well under way, and the program has made progress in establishing an effective management structure. However, key steps remain in restructuring the acquisition, including completing important acquisition documents such as the system engineering plan, the acquisition program baseline, and the memorandum of agreement documenting the three agencies’ roles and responsibilities. Until these key documents are finalized, the program is unable to finalize plans for restructuring the program. Additionally, the program office continues to have difficulty filling key positions and lacks a programwide staffing process. Until the program establishes an effective and repeatable staffing process, it will have difficulties in identifying and filling its staffing needs in a timely manner. Having insufficient staff in key positions impedes the program office’s ability to conduct important management and oversight activities, including revising cost and schedule estimates, monitoring progress, and managing technical risks. The program faces even further challenges if DOD proceeds with plans to reassign the Program Executive Officer this summer. Such a move would add unnecessary risk to an already risky program. In addition, the likelihood exists that there will be further cost increases and schedule delays because of technical problems on key sensors and pending contract negotiations. Major program segments—including the space and ground segments—are making progress in their development and testing. However, two critical sensors have experienced problems and are considered high risk, and risks remain in developing and implementing the ground-based data processing system. 
Given the tight time frames for completing key sensors, integrating them, and getting the ground-based data processing systems developed, tested, and deployed, continued close oversight of milestones and risks is essential to minimize potential cost increases and schedule delays. Because of the importance of effectively managing the NPOESS program to ensure that there are no gaps in the continuity of critical weather and environmental observations, we are making recommendations to the Secretaries of Defense and Commerce and to the Administrator of NASA to ensure that the responsible executives within their respective organizations approve key acquisition documents, including the memorandum of agreement among the three agencies, the system engineering plan, the test and evaluation master plan, and the acquisition strategy, as quickly as possible but no later than April 30, 2007. We are also recommending that the Secretary of Defense direct the Air Force to delay reassigning the recently appointed Program Executive Officer until all sensors have been delivered to the NPOESS Preparatory Program; these deliveries are currently scheduled to occur by July 2008. We are also making two additional recommendations to the Secretary of Commerce. We recommend that the Secretary direct the Undersecretary of Commerce for Oceans and Atmosphere to ensure that NPOESS program authorities develop and implement a written process for identifying and addressing human capital needs and for streamlining how the program handles the three different agencies’ administrative procedures, and establish a plan for immediately filling needed positions. We received written comments on a draft of this report from the Deputy Secretary of the Department of Commerce (see app. III), the Deputy Assistant Secretary for Networks and Information Integration of the Department of Defense (see app. IV), and the Deputy Administrator of the National Aeronautics and Space Administration (see app. V). 
All three agencies agreed that it was important to finalize key acquisition documents in a timely manner, and DOD proposed extending the due dates for the documents to July 2, 2007. Because the NPOESS program office intends to complete contract negotiations by July 4, 2007, we remain concerned that any further delays in approving the documents could delay contract negotiations and thus increase the risk to the program. In addition, the Department of Commerce agreed with our recommendation to develop and implement a written process for identifying and addressing human capital needs and to streamline how the program handles the three different agencies’ administrative procedures. The department also agreed with our recommendation to plan to immediately fill open positions at the NPOESS program office. Commerce noted that NOAA identified the skill sets needed for the program and has implemented an accelerated hiring model and schedule to fill all NOAA positions in the NPOESS program. The department also stated that the Program Director will begin presenting the detailed staffing information at monthly program management reviews, including identifying any barriers and recommended corrective actions. Commerce also noted that NOAA has made NPOESS hiring a high priority and has documented a strategy— including milestones—to ensure that all 20 needed positions are filled by June 2007. DOD did not concur with our recommendation to delay reassigning the Program Executive Officer, noting that the NPOESS System Program Director responsible for executing the acquisition program would remain in place for 4 years. The Department of Commerce also noted that the Program Executive Officer position is planned to rotate between the Air Force and NOAA. Commerce also stated that a selection would be made prior to the departure of the current Program Executive Officer to provide an overlap period to allow for knowledge transfer and ensure continuity. 
However, over the last few years, we and others (including an independent review team and the Commerce Inspector General) have reported that ineffective executive-level oversight helped foster the NPOESS program’s cost and schedule overruns. We remain concerned that reassigning the Program Executive Officer at a time when NPOESS is still facing critical cost, schedule, and technical challenges will place the program at further risk. While it is important that the System Program Director remain in place to ensure continuity in executing the acquisition, this position does not ensure continuity in the functions of the Program Executive Officer. The current Program Executive Officer is experienced in providing oversight of the progress, issues, and challenges facing NPOESS and in coordinating with Executive Committee members, as well as with the DOD authorities responsible for executing Nunn-McCurdy requirements. Additionally, while the Program Executive Officer position is planned to rotate between agencies, the memorandum of agreement documenting this arrangement is still in draft and should be flexible enough to allow the current Program Executive Officer to remain until critical risks have been addressed. Further, while Commerce plans to allow a period of overlap between the selection of a new Program Executive Officer and the departure of the current one, time is running out. The current Program Executive Officer is expected to depart in early July 2007 and, as of mid-April 2007, a successor has not yet been named. NPOESS is an extremely complex acquisition, involving three agencies, multiple contractors, and advanced technologies. There is not sufficient time to transfer knowledge and develop the sound professional working relationships that the new Program Executive Officer will need to succeed in that role. Thus, we remain convinced that, given NPOESS’s current challenges, reassigning the current Program Executive Officer at this time would not be appropriate. 
All three agencies also provided technical comments, which we have incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of Defense, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) evaluate the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program office’s progress in restructuring the acquisition; (2) evaluate the program office’s progress in establishing an effective management structure; (3) assess the reliability of the new life cycle cost estimate and proposed schedule; and (4) identify the status and key risks facing the program’s major segments (the launch, space, data processing, and ground control segments) and evaluate the adequacy of the program’s efforts to mitigate these risks. To evaluate the NPOESS program office’s progress in restructuring the acquisition program, we reviewed the program’s Nunn-McCurdy certification decision memo and program documentation including status briefings and milestone progress reports. We also interviewed program office officials and attended conferences and senior-level management program review meetings to obtain information on the program’s acquisition restructuring. 
To evaluate the program office’s progress in establishing an effective management structure, we reviewed the Nunn-McCurdy decision memo for the program, as well as program documentation and briefings. We assessed the status of efforts to implement recommendations regarding the program’s management structure, including the work of the team responsible for reviewing the management structure under the Nunn-McCurdy review. We also analyzed the program office’s organizational charts and position vacancies. Finally, we interviewed officials responsible for reviewing the management structure of the program under Nunn-McCurdy, attended senior-level management review meetings to obtain information related to the program’s progress in establishing and staffing the new management structure, and interviewed program office officials responsible for human capital issues to obtain clarification on plans and goals for the new management structure. To assess the reliability of the new life cycle cost estimate and proposed schedule, we analyzed the Office of the Secretary of Defense’s Cost Analysis Improvement Group’s (cost analysis group) cost estimating methodology and the assumptions used to develop its independent cost estimate. Specifically, we assessed the cost analysis group’s methodology against 12 best practices recognized by cost-estimating organizations within the federal government and industry for the development of reliable cost estimates. These best practices are also contained in a draft version of our cost guide, which is currently being developed by GAO cost experts. We also assessed cost- and schedule-related data, including the work breakdown structure and detailed schedule risk analyses, to determine the reasonableness of the cost analysis group’s assumptions. We also interviewed cost analysis group officials to obtain clarification on cost and schedule estimates and their underlying assumptions. 
Further, we interviewed program officials to identify any assumptions that may have changed. To identify the status and key risks facing the program’s major segments (the launch, space, data processing, and ground control segments) and to evaluate the adequacy of the program’s efforts to mitigate these risks, we reviewed the program’s Nunn-McCurdy certification decision memo and other program documentation. We analyzed briefings and monthly program management documents to determine the status and risks of the key program segments. We also analyzed earned value management data obtained from the contractor to assess the contractor’s performance to cost and schedule. We reviewed cost reports and program risk management documents and interviewed program officials to determine the program segments’ risks that could negatively affect the program’s ability to maintain the current schedule and cost estimates. We also interviewed agency officials from the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the NPOESS program office to determine the status and risks of the key program segments. Finally, we observed senior-level management review meetings and attended conferences to obtain information on the status of the NPOESS program. We performed our work at the NPOESS Integrated Program Office and at DOD, NASA, and NOAA offices in the Washington, D.C., metropolitan area between July 2006 and April 2007 in accordance with generally accepted government auditing standards. Algorithms are sets of instructions, expressed mathematically, that translate satellite sensor measurements into usable information. In the NPOESS program, government contractors are responsible for algorithm development; the program office is responsible for independently validating the algorithms. 
Scientists develop these algorithms, which are then written as computer code to be incorporated into the interface data processing system (IDPS) operational system. The NPOESS ground system uses three primary types of algorithms. Algorithms that develop raw data records “unpack” the digital packets (the ones and zeros) sent from the satellite and received by the antennas/IDPS, associate the data with information about the satellite’s location, and, finally, translate the packets back into the data as it existed at the sensor. Algorithms that develop sensor and temperature data records allow on-ground users to understand what the sensor saw; these algorithms translate the information from the sensor into a measure of the various forms of energy (e.g., brightness, temperature, radiance). Algorithms that produce the weather products called environmental data records (EDR) are crosscutting. They combine various data records, as well as other data, to produce measures useful to scientists. Additionally, EDRs can be “chained”—that is, the output of one EDR algorithm becomes an input to the next EDR algorithm. To illustrate, the cloud detection/mask is an important “base” EDR because many EDRs, like sea surface temperature, are calculated only when clouds are not present. Figure 13 shows the flow of the data and algorithms. A corollary to algorithm development is the calibration and validation process. According to a senior algorithm scientist, in this process, once the satellite has been launched, scientists verify that the sensors accurately report ground conditions. For example, one EDR from the visible/infrared imager radiometer suite (VIIRS) is “ocean color.” Once the sensor is in orbit, scientists can compare the results that the VIIRS sensor reports on ocean color with the known results from sensors on ocean buoys that also measure ocean color in select locations. 
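The buoy-comparison step can be illustrated with a minimal sketch; the measurement values, variable names, and the simple mean-bias statistic below are illustrative assumptions, not the program’s actual calibration method:

```python
# Illustrative sketch only: compare sensor-reported values against "ground
# truth" from buoys at matched locations, then estimate the average bias
# that calibrators could use to adjust the algorithm's output.

def mean_bias(sensor_values, buoy_values):
    """Average difference between matched sensor and buoy measurements."""
    diffs = [s - b for s, b in zip(sensor_values, buoy_values)]
    return sum(diffs) / len(diffs)

# Hypothetical matched measurements of an ocean-color-like quantity.
sensor = [0.52, 0.48, 0.55, 0.50]
buoys = [0.50, 0.47, 0.52, 0.49]

bias = mean_bias(sensor, buoys)
# A nonzero bias suggests the data-record algorithms need "tweaking,"
# for example by subtracting the bias from the sensor-derived values.
corrected = [s - bias for s in sensor]
```

After such a correction, the residual average difference between the sensor and the buoys is near zero; in practice, calibration adjusts the algorithms themselves rather than simply offsetting their output.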
Then, if the sensors do not accurately report the ground conditions, scientists can calibrate, or “tweak,” the algorithms used to develop sensor, temperature, and environmental data records to report on ground conditions more accurately. According to an agency official, fully calibrating a simple sensor once it has been launched can take approximately a year. A more complicated sensor can take 18 months to 2 years (see fig. 14). In addition to the contact named above, Colleen Phillips, Assistant Director; Carol Cha; Neil Doherty; Nancy Glover; Kathleen S. Lovett; Karen Richey; and Teresa Smith made key contributions to this report.
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) is a tri-agency acquisition--managed by the Departments of Commerce and Defense and the National Aeronautics and Space Administration--that has experienced escalating costs, schedule delays, and technical difficulties. These factors led to a June 2006 decision to restructure the program, thereby decreasing the program's complexity, increasing its estimated cost to $12.5 billion, and delaying the first two satellites by 3 to 5 years. GAO was asked to (1) assess progress in restructuring the acquisition, (2) evaluate progress in establishing an effective management structure, (3) assess the reliability of the cost and schedule estimate, and (4) identify the status and key risks facing the program's major segments. To do so, GAO analyzed program and contractor data, attended program reviews, and interviewed program officials. The NPOESS program office has made progress in restructuring the acquisition by establishing and implementing interim program plans guiding the contractors' work activities in 2006 and 2007; however, important tasks leading up to finalizing contract changes remain to be completed. Executive approvals of key acquisition documents are about 6 months late--due in part to the complexity of navigating three agencies' approval processes. Delays in finalizing these documents could hinder plans to complete contract negotiations by July 2007 and could keep the program from moving forward in fiscal year 2008 with a new program baseline. The program office has also made progress in establishing an effective management structure by adopting a new organizational framework with increased oversight from program executives and by instituting more frequent and rigorous program reviews; however, plans to reassign the recently appointed Program Executive Officer will likely increase the program's risks. 
Additionally, the program lacks a process and plan for identifying and filling staffing shortages, which has led to delays in key activities such as cost estimating and contract revisions. Until this process is in place, the NPOESS program faces increased risk of further delays. The methodology supporting a June 2006 independent cost estimate with the expectation of initial satellite launch in January 2013 was reliable, but recent events could increase program costs and delay schedules. Specifically, the program continues to experience technical problems on key sensors, and program costs will likely be adjusted during upcoming negotiations on contract changes. A new baseline cost and schedule reflecting these factors is expected by July 2007. Development and testing of major NPOESS segments--including key sensors and ground systems--are under way, but significant risks remain. For example, while work continues on key sensors, two of them experienced significant problems and are considered high risk. Additionally, while progress has been made in reducing delays in the data processing system, work remains in refining the algorithms needed to translate sensor observations into usable weather products. Given the tight time frames for completing this work, it will be important for program officials and executives to continue to provide close oversight of milestones and risks.
The Customs Aviation Program was established in 1969 to reduce the level of smuggling, increase smugglers’ risk and cost, and improve detection and apprehension of drug smuggling by aircraft, boats, and vehicles. The Customs Aviation Program gets its authority from a number of sources. The Office of National Drug Control Policy (ONDCP) has designated the Customs Service as the lead federal agency responsible for interdicting the movement of illicit drugs into the United States. In addition, 19 U.S.C. 1590 provides the specific legal authority under which Customs enforces aviation smuggling laws. Congress provided specific language regarding the operations of the Customs Air Program beginning with Customs’ fiscal year 1996 appropriation, contained in P.L. 104-52. The provision stated that the program’s operations include, among other things, “the interdiction of narcotics and other goods; the provision of support to Customs and other Federal, State, and local agencies in the enforcement or administration of laws enforced by the Customs Service; and, at the discretion of the Commissioner of Customs, the provision of assistance to Federal, State, and local agencies in other law enforcement and emergency humanitarian efforts.” The Customs Aviation Program is headed by the Executive Director, Air Interdiction Division, located in Washington, D.C. The Executive Director reports to the Assistant Commissioner, Customs Office of Investigations. Its field headquarters, the Customs National Aviation Center (CNAC), located in Oklahoma City, OK, provides operational, administrative, and logistical control and accountability over all Customs aviation resources. In addition, the aviation program operates its Domestic Air Interdiction Coordination Center (DAICC) in Riverside, CA, which conducts radar surveillance using various radar sources to identify, intercept, and apprehend suspect aircraft, utilizing Customs or other agencies’ air assets. 
The aviation program maintains 10 air branches and 10 air units, as shown in appendix I. The 10 air units are subcomponents of the branches and report to an air branch chief. The aviation program uses a variety of aircraft, such as the P-3 long-range aircraft, the Blackhawk helicopter, and the Citation II, a high-speed, multijet fixed-wing aircraft. A detailed inventory of the Customs air fleet and pictures of selected aircraft are shown in table 3 and figure 5. As agreed with your office, we used the approach described in this section to respond to your request. We performed our review at U.S. Customs headquarters; the CNAC in Oklahoma City, OK; the DAICC in Riverside, CA; the Customs Air Branch in Miami, FL; and the Department of Defense’s (DOD) headquarters and DOD’s Southern Command’s headquarters in Miami, FL. We also met with officials at ONDCP, the Drug Enforcement Administration (DEA), the U.S. Interdiction Coordinator, and the U.S. Coast Guard. To determine Customs Aviation Program missions and whether they had changed over time, we interviewed Customs Aviation Program officials and the Assistant Commissioner, Office of Investigations. We also reviewed relevant legislation, executive branch policies and guidance, Customs policies and procedures, the National Drug Control Strategy, and interagency agreements. In addition to these reviews, we interviewed officials at ONDCP, DOD, DEA, and the U.S. Coast Guard. To determine the Customs Aviation Program’s resources and activities for fiscal years 1992 to 1997, we reviewed congressional appropriations to Customs for the program. We examined Customs documents showing staffing, aircraft, and staff support levels for these years. We also reviewed total annual program funding and expenditures by mission. To determine the activities of the aviation program for fiscal years 1992 to 1997, we reviewed expenditures by mission and data on flight hours for fiscal years 1992 through 1997. 
To determine which aircraft take-off cancellations were related to resource constraints and which were not, we analyzed the reasons for the cancellations. We categorized as resource dependent those cancellations that occurred because an aircraft or aircrew was not available. For a small percentage of cancellations (4 percent), we were unable to determine the reason for cancellation. All other cancellations we categorized as not resource dependent. Customs officials agreed with this approach. To determine the adequacy of the performance measures Customs uses to judge the results of its aviation program efforts, we interviewed officials from Customs and other federal agencies involved in drug control and interdiction and reviewed relevant documents provided by these agencies. We reviewed the ONDCP National Drug Control Strategy and Customs documents showing the results of the aviation program over the past 6 fiscal years. To obtain information on Customs Aviation Program performance measures for its antidrug activities, we interviewed officials responsible for the Customs Aviation Program and reviewed key agency documents such as Customs Aviation Program performance plans developed for implementing the Government Performance and Results Act of 1993 (GPRA), P.L. 103-62. We compared the Customs Aviation Program performance measurement plans with GPRA requirements to determine whether they conform to the principles of the act. We did our audit work between April and August 1998 in accordance with generally accepted government auditing standards. Since the establishment of the Customs Aviation Program in 1969, its basic mandate to use air assets to counter the drug smuggling threat has not changed. The program was established to reduce the level of drug smuggling; increase smugglers’ risk and cost; and improve the detection and apprehension of drug smuggling by aircraft, boats, and vehicles. 
What has changed, however, is the amount of resources spent among the three specific mission areas—border interdiction, foreign counterdrug operations, and other law enforcement support. Program priorities, as measured by the amount of mission flight hours, have shifted from border interdiction to supporting foreign counterdrug operations. The percentage of flight hours used to provide support to other law enforcement agencies has decreased slightly. Key events in Customs Aviation Program history are shown in appendix II. As shown in figure 1, flight hours for the border interdiction mission decreased from about 40 percent of total flight hours in fiscal year 1993 (the earliest year complete data were available) to 24 percent in fiscal year 1997. Flight hours for the foreign counterdrug operations mission increased from less than 1 percent in fiscal year 1993 to 23 percent in fiscal year 1997. During this 5-year period, the other law enforcement support mission decreased slightly from about 59 percent of total mission flight hours to 53 percent. From fiscal year 1993 to fiscal year 1997, the total number of flight hours for all missions decreased by over one-third, from about 45,000 hours to about 29,000 hours, as shown in figure 2. An original mission of the aviation program was aimed at border interdiction to counter the air drug smuggling threat along the Southwest border. By 1965, drug smugglers had turned to private aircraft as an effective means of border penetration. By 1969, major unchallenged drug smuggling routes had been established along the entire southern border of the United States. At that time, Customs owned only one single-engine aircraft. By 1972, Customs had acquired 11 fixed-wing aircraft and 8 helicopters to challenge the increasing drug threat and had established air branches in San Diego, CA; Tucson, AZ; Corpus Christi, TX; and Miami, FL. 
In the early 1980s, as the air drug smuggling threat decreased along the Southwest border and increased in the Gulf of Mexico and Florida areas, the Customs Aviation Program, along with other Customs units and other law enforcement agencies, began to address the critical drug smuggling problem facing those areas. DOD assets and Federal Aviation Administration (FAA) radar were dedicated in support of the aviation program’s border interdiction mission. Navy aircraft were used to detect and notify Customs Service aircrews of suspect drug smuggling targets. In the mid-1980s, Customs acquired its first P-3 aircraft for long-range surveillance and patrol activity and initiated its deployment of aerostats (i.e., radar mounted on balloons that are tethered to land bases or ships) to provide detection coverage along the southern border of the United States and the Caribbean area. In 1987, Congress directed the establishment of Command, Control, and Intelligence centers to provide coordinated tactical control among the various agencies for air interdiction. Customs established a center in Richmond Heights, FL, and one in Riverside, CA. In 1994, these centers were consolidated into the DAICC in Riverside, CA. 
The border interdiction mission is generally accomplished through a four-step process: (1) using DOD or FAA radar or other means, such as failure to file a flight plan with FAA or detection by patrol aircraft, to detect aircraft that are suspected of drug smuggling; (2) dispatching an interceptor aircraft, such as the high-speed, multijet engine Citation II, to physically locate the suspect aircraft and check the aircraft’s registration number through various law enforcement databases to determine whether it has been involved in previous illegal activities; (3) employing tracker aircraft, such as the P-3, to follow the suspect aircraft to its destination; and (4) using a Blackhawk helicopter, which is a military aircraft capable of being staffed with several Customs or other federal, state, or local law enforcement officers, to stop the suspect aircraft when it lands, detain the crew, search the aircraft, and, if appropriate, arrest the suspect(s) for drug smuggling and seize any illegal drugs. As part of its border interdiction mission, Customs aircraft are also deployed to interdict land and marine targets as appropriate. Customs started its foreign counterdrug operations in 1990. These operations began in Mexico and Central America, with Customs aircraft used to provide early detection of drug trafficking flights and other activities. The foreign counterdrug operations were greatly expanded in November 1993, when President Clinton signed Presidential Decision Directive 14 (PDD-14), which established a new framework for international drug control efforts. PDD-14 directed an international drug control strategy to assist nations showing the political will to combat drug-trafficking organizations and interdict drug trafficking. 
Additionally, PDD-14 called for a shift in the focus of cocaine interdiction from the transit zone (i.e., the 2-million square-mile area between the United States and South American borders) to the source zone (i.e., countries where cocaine is produced, primarily Colombia and Peru). Customs responded to PDD-14 by dedicating increased resources to its foreign counterdrug operations, primarily in South America, and less to border interdiction. These operations primarily support DOD, which is the lead agency for detecting and monitoring drug smuggling aircraft in the source zone countries. Currently, Customs has aircraft and aircrews in Mexico, Central America, and South America performing counterdrug activities. The Customs Aviation Program supports U.S. foreign counterdrug operations by temporarily assigning aircraft and aircrews from its various air branches and units to Mexico, Central America, and South America. Customs aircraft and aircrews in these operations are used to detect and follow suspect drug trafficking aircraft and, if appropriate, alert host country apprehension forces. Customs aircraft and aircrews are also called upon to fly intelligence-gathering missions in support of U.S. foreign counterdrug activities. The P-3 and the Citation II are used in the foreign counterdrug operations mission. Another original mission of the Customs Aviation Program was to assist other Customs units, the Department of the Treasury, and other federal, state, and local law enforcement agencies by providing other aviation law enforcement support. By 1996, Customs had acquired 61 aircraft, which are largely dedicated to the law enforcement support mission. In fiscal year 1997, Congress terminated the Bureau of Alcohol, Tobacco, and Firearms (ATF) aviation program and directed the Customs Aviation Program to assume ATF’s aviation responsibilities. 
As a result, Customs established aviation units in Sacramento, CA; Kansas City, KS; and Cincinnati, OH, for this new responsibility. Since 1993, support to other law enforcement agencies, which also included emergency humanitarian efforts, has accounted for about one-half of the Customs Aviation Program’s activities and seizures. The Customs Aviation Program provides support to other law enforcement agencies by using its aircraft to provide surveillance of ongoing criminal investigations, such as undercover operations or following a suspect vehicle. The Customs Aviation Program primarily uses single-engine, fixed-wing aircraft and small helicopters in its law enforcement support role. Between fiscal years 1992 and 1997, the Aviation Program’s overall funding, aircraft mission takeoffs, personnel, and number of aircraft have decreased. As a result of these reductions, Customs air branches have reduced their operations. While Customs’ Aviation Program funding increased slightly in fiscal year 1993, overall its budget, excluding capital investments, decreased between fiscal years 1992 and 1997, as shown in figure 3. In constant or inflation-adjusted dollars, the decrease was 31 percent. The funding level for salaries and expenses, in constant dollars, decreased by about 15 percent. Similarly, funding for operations and maintenance declined by about 40 percent in constant dollars. In fiscal years 1992 through 1994, salaries and expenses comprised just over one-third of the annual program total, compared with just under two-thirds of the total for operations and maintenance. However, in the last 3 fiscal years, salaries and expenses increased to just under half of the total, while operations and maintenance decreased to just over one-half. According to Customs officials, these reductions forced the agency in 1994 to reduce its border interdiction response from 24 hours per day to 16 hours per day at four of its air branches. 
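The constant-dollar (inflation-adjusted) comparisons used above can be sketched as follows; the deflator values and dollar figures are illustrative assumptions, not the actual price indexes applied in this report:

```python
# Illustrative sketch only: restate nominal (current-dollar) funding in
# constant base-year dollars using a price deflator, then compute the
# percentage change between two fiscal years.

def to_constant_dollars(nominal, deflator, base_deflator):
    """Convert a nominal amount to base-year dollars."""
    return nominal * base_deflator / deflator

def percent_change(old, new):
    return (new - old) / old * 100

# Hypothetical figures, in millions of dollars, with the deflator
# indexed so that fiscal year 1992 is the base year.
fy92_nominal, fy92_deflator = 195.0, 1.00
fy97_nominal, fy97_deflator = 135.0, 1.12

fy97_constant = to_constant_dollars(fy97_nominal, fy97_deflator, fy92_deflator)
decline = percent_change(fy92_nominal, fy97_constant)
# With these assumed inputs, the constant-dollar decline is about 38 percent.
```

Because prices rise over time, a constant-dollar decline is always larger than the corresponding nominal decline whenever the deflator increases over the period.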
As of August 1998, Miami, FL; Tucson, AZ; and San Angelo, TX, are the only 3 of the 10 air branches that provide 24-hours-per-day coverage. Customs officials told us that the branches work together as a means to compensate, in part, for the reduced coverage each branch provides. Miami air branch officials told us their branch works with the other branches to provide coverage when needed. In addition, Customs officials told us they ended 24-hour maintenance shifts at all the air branches and that only one maintenance crew is available during the day at each air branch. As shown in table 1, the total number of aircraft mission takeoffs decreased from about 22,000 in fiscal year 1992 to about 15,000 in fiscal year 1997. The number of times an aircraft did not take off after originally being requested to do so increased from 1,013 in fiscal year 1992 to 2,076 in fiscal year 1997. This translates into a reduction from a 96 percent take-off rate in fiscal year 1992 to an 88 percent take-off rate in fiscal year 1997. Although the take-off rate decreased by 8 percentage points from fiscal year 1992 to fiscal year 1997, the actual number of cancelled takeoffs more than doubled. We analyzed the cancelled takeoffs for fiscal years 1992 and 1997 as shown in table 2. Most of the increase in the number of cancelled takeoffs was attributable to reasons that did not depend on resources, such as missions being cancelled or postponed by the law enforcement officials originally requesting the flight. However, other cancellations occurred because Customs Aviation Program resources, such as the appropriate aircraft or aircrew for the mission, were not available. For example, in October 1996, the California Riverside Aviation unit near the DAICC was requested to provide backup aviation support to the State Narcotics Task Force on a surveillance mission. 
However, this support could not be provided by the unit because the Cessna 210 aircraft or aircrew was not available; therefore, the case agent cancelled the backup request. In April 1997, several cancellations occurred because the Miami air branch did not have an aviation interdiction officer available for radar patrol. As shown in figure 4, the Customs Aviation Program’s number of authorized personnel decreased by 11 percent between fiscal years 1992 and 1997, from 960 to 854. Also, the program’s number of actual personnel decreased by 22 percent, from 956 to 745. According to Customs officials, the aviation program lost personnel due to budget reductions, a hiring freeze in fiscal years 1993 through 1996, and attrition due to hiring of Customs Aviation Program pilots by commercial airlines. During this time, an average of about three people per month left the aviation program. In fiscal year 1997, the hiring freeze ended and the aviation program began hiring personnel. In fiscal year 1992, Customs implemented a new strategic plan to carry out its aviation program. The plan called for an authorized personnel level of 960, and the program received funding in fiscal year 1992 for this personnel level. However, program officials said that the plan could not be carried out fully because foreign counterdrug operations were added as a principal mission in fiscal year 1994, and the budget was reduced in fiscal year 1995. Table 3 shows the total number of aircraft operated by the Customs Aviation Program. The number of aircraft declined about 10 percent between fiscal years 1992 and 1997. Customs officials said that during fiscal years 1993 and 1994, the number of fixed-wing aircraft decreased from 61 to 38 due to budget reductions. In addition, officials said that as of August 1998, they were unable to operate all of their aircraft because of insufficient funding. 
For example, four additional high-speed Blackhawk helicopters were being kept in storage because of the high costs of operation. (See figure 5 for pictures of selected aircraft.) Operations and maintenance costs per aircraft flight hour have increased over the last several years. For example, the cost per flight hour in real dollars to operate a P-3 increased from $2,979 in 1994 to $3,687 in 1997; for a Blackhawk helicopter, the cost increased from $2,419 to $3,859; and for the Citation II, it increased from $1,070 to $1,885. Customs officials said increased costs were one of the reasons they were flying fewer hours per year. The other primary reasons were that trained pilots and other aircrew members were being dedicated to other missions or that aircraft were unavailable because they had been dedicated to another mission or were undergoing extended maintenance. Customs is currently developing performance measures to more adequately report the results of its aviation program. The Customs Aviation Program uses measures such as seizures and the number of suspect aircraft detected to gauge the results of its efforts. For example, in fiscal year 1997, Customs reported seizing about 22,900 pounds of cocaine and about 9,100 pounds of marijuana. In addition, for its foreign counterdrug operations, Customs reported a track rate of 57 percent in the transit zone. The track rate is the percentage of suspected narcotics trafficking aircraft that were detected and tracked by Customs P-3 aircraft and that were transferred to interdiction or apprehension forces or tracked to the landing and delivery site in the transit zone. However, these performance measures track activity, not results or effectiveness. Several Customs Aviation Program officials, for example, made this point by noting that it is unclear whether an increase in seizures indicates that Customs has become more effective or that the amount of drug smuggling has increased. 
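The track-rate measure defined above is a simple ratio; a minimal sketch with hypothetical counts (the function name and figures are illustrative, chosen only to reproduce a 57 percent rate):

```python
# Illustrative sketch only: the track rate is the share of detected suspect
# aircraft that were handed off to interdiction or apprehension forces or
# tracked to their landing and delivery site.

def track_rate(suspects_detected, suspects_tracked_or_transferred):
    """Track rate as a percentage of detected suspect aircraft."""
    return suspects_tracked_or_transferred / suspects_detected * 100

# Hypothetical counts for a reporting period.
detected = 200
tracked = 114

rate = track_rate(detected, tracked)  # about 57 percent
```

As the surrounding discussion notes, such a ratio measures activity rather than outcomes: a high track rate says nothing about whether smuggling itself declined.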
We have previously reported that traditional measures, such as the number of seizures, pose problems for measuring the performance of drug interdiction programs. We have also recognized that developing sound, results-oriented performance measures and accompanying data is still a difficult and time-consuming task. Customs has also used other measures, such as an air-threat index, in an attempt to measure the results of its aviation program. The air-threat index used various indicators, such as the number of stolen and/or seized aircraft, to determine the potential threat of air drug smuggling. However, the air-threat index and selected other performance measures have been discontinued because Customs determined they were not good measures of results and effectiveness. For example, the aircraft seizures indicator took into account only those seizures in which the aircraft was seized, eliminating those smuggling-related events in which drugs were seized but, for one reason or another, the aircraft was not. Customs Aviation Program officials said that, given their limited success with earlier efforts to measure program results, Customs is currently revising its performance measures. Customs Aviation Program officials told us that one of the primary obstacles to developing meaningful performance measures is that much of the program’s success depends on the actions of other federal departments and state and local law enforcement agencies, as well as the cooperation of foreign government law enforcement agencies. The officials said the measures they are developing also need to be more consistent with GPRA, which seeks to shift the focus of federal management and decisionmaking away from the activities performed and toward the results of those activities. 
Consequently, Customs is developing a performance measure that quantifies the increase in the cost of doing business for a drug smuggler as a result of Customs Aviation Program activity. Customs is also now developing a performance measure to judge the change in a drug smuggler’s behavior. This would be an assessment of Customs’ success in forcing the drug trafficker to change the routes and/or methods used for smuggling drugs into the United States. Customs officials said that these new measures will be part of their fiscal year 2000 budget request. We provided a draft of this report for comment to the Secretary of the Treasury and the Commissioner of Customs. On August 6, 1998, we met with the Acting Executive Director of the Customs Aviation Program and members of his staff who provided oral comments for Treasury and Customs. These officials concurred with our draft report and provided some technical comments, which we incorporated where appropriate. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member of your Subcommittee, the Chairmen and Ranking Minority Members of other congressional committees with jurisdiction over the Customs Service, the Secretary of the Treasury, and the Commissioner of Customs. We will also make copies available to others upon request. The major contributors to this report are listed in appendix III. If you or your staff have any questions on this report, please call me on (202) 512-8777.
Drug smugglers used private aircraft and established unchallenged smuggling routes along the entire U.S. southern border.
Aviation program was established and its principal mission was border interdiction.
Smuggling threat shifted from the southern border to the Gulf of Mexico and south Florida.
Command, Control, Communications, and Intelligence Center West became operational.
Counterdrug operations began in Mexico with two Citations.
Customs National Aviation Center, the program's operational headquarters, was established in Oklahoma City.
Foreign counterdrug operations in South America began.
Overall program funding and personnel decreased.
Other law enforcement support accounted for about half of the aviation program's flight hour activities.
PDD-14 established a new framework for international drug control efforts.
Hiring freeze in effect.
Flight hours shifted from border interdiction to foreign counterdrug operations and other law enforcement support.
Increased number of P-3 and Citation aircraft were dedicated to the program's South American operations.
Twenty-four-hour maintenance of aircraft ended at all branches.
Domestic border interdiction response was reduced from 24 hours per day to 16 hours per day at four air branches.
Aviation program was developing Government Performance and Results Act (GPRA) measures that program officials say will more accurately measure effectiveness.
Jan Montgomery, Assistant General Counsel
Pursuant to a congressional request, GAO provided information on the Customs Service's Customs Aviation Program, focusing on: (1) the program's missions and how they have changed since fiscal year (FY) 1992; (2) the annual level of resources and activities since FY 1992; and (3) the adequacy of the performance measures Customs uses to judge the results of its aviation program. GAO noted that: (1) since the establishment of the Customs Aviation Program in 1969, its basic mandate to use air assets to counter the drug smuggling threat has not changed; (2) originally, the Customs Aviation Program had two principal missions: (a) border interdiction of drugs being smuggled by plane into the United States; and (b) law enforcement support to other Customs offices as well as other federal, state, and local law enforcement agencies; (3) in 1993, President Bill Clinton instituted a new policy to control drugs coming from South and Central America; (4) because Customs aircraft were to be used to help carry out this policy, foreign counterdrug operations became a third principal mission for the aviation program; (5) since then, the program has devoted about 25 percent of its resources to the border interdiction mission, 25 percent to foreign counterdrug operations, and 50 percent to other law enforcement support; (6) Customs Aviation Program funding decreased from about $195 million in FY 1992 to about $135 million in FY 1997--about 31 percent in constant or inflation-adjusted dollars; (7) while available funds have decreased, operations and maintenance costs per aircraft flight hour have increased; (8) Customs Aviation Program officials said that this increase in costs is one of the reasons they are flying fewer hours each year; (9) from FY 1993 to FY 1997, the total number of flight hours for all missions decreased by over one-third, from about 45,000 hours to about 29,000 hours; (10) the size of Customs' fleet dropped in FY 1994, when Customs took 19 surveillance aircraft out of 
service because of funding reductions; and the fleet has remained at about 115 since then; (11) the number of Customs Aviation Program onboard personnel has dropped steadily, from a high of 956 in FY 1992 to 745 by the end of FY 1997; (12) Customs has been using traditional law enforcement performance measures for the aviation program; (13) these measures, however, are used to track activity, not results or effectiveness; (14) until 1997, Customs also used an air threat index as an indicator of its effectiveness in detecting illegal air traffic; (15) however, Customs has discontinued using this indicator, as well as selected other performance measures, because Customs determined that they were not good measures of results and effectiveness; and (16) recognizing that these measures were not providing adequate insights into whether the program was producing desired results, Customs is developing new performance measures in order to better measure results.
Ally Financial is one of the country’s largest financial holding companies, with total assets of $148.5 billion as of March 31, 2014. Its primary line of business is automotive financing—both consumer financing and leasing and dealer floor-plan financing. Ally Financial (when it was known as GMAC) formerly served as General Motors Company’s (GM) captive automotive finance company. GMAC’s subsidiaries offered financial services such as auto insurance and residential mortgages. In 2006, Cerberus Capital Management purchased 51 percent of the company (GM retained 49 percent). As the housing market declined in the late 2000s, the previously profitable GMAC mortgage business unit began producing significant losses. For example, the company’s Residential Capital LLC (ResCap) subsidiary lost approximately $17 billion from 2007 through 2009. During the same period, U.S. automobile sales dropped from 16.4 million to 10.4 million cars and light trucks, negatively affecting the company’s core automobile financing business. On May 14, 2012, ResCap and certain of its wholly owned direct and indirect subsidiaries filed voluntary petitions for relief under Chapter 11 of the Bankruptcy Code in the U.S. Bankruptcy Court for the Southern District of New York (Bankruptcy Court). The bankruptcy created uncertainties about Ally Financial’s financial obligations. As a financial holding company, Ally Financial is regulated and supervised by the Federal Reserve. Under the Dodd-Frank Act and implementing regulations, the Federal Reserve conducts an annual supervisory stress test of bank holding companies with $50 billion or more in total consolidated assets to evaluate whether the companies have sufficient capital to absorb losses resulting from adverse economic conditions. 
For the stress tests, the Federal Reserve projects revenue, expenses, losses, and resulting post-test capital levels and regulatory capital ratios, including the tier 1 capital ratio and the tier 1 common ratio, under three economic scenarios (baseline, adverse, and severely adverse). In addition, the Federal Reserve requires the same bank holding companies to conduct an annual company-run stress test using the same macroeconomic scenarios that the Federal Reserve uses to conduct its supervisory stress test. The Federal Reserve also conducts an annual exercise, CCAR, to help ensure that large bank holding companies have robust, forward-looking capital planning processes that take into account their unique risks and set aside sufficient capital to operate during periods of economic and financial stress. The Federal Reserve evaluates capital adequacy; internal processes for assessing capital adequacy; plans for capital distributions, such as dividend payments or stock repurchases; and other actions that affect capital. The Federal Reserve may object to a capital plan because of significant deficiencies in the planning process or because one or more capital ratios would fall below required levels under the assumption of stress and planned distributions. If the Federal Reserve objects to a proposed capital plan, the bank holding company is permitted to make capital distributions only if the Federal Reserve indicates in writing that it does not object. The company also must resubmit the capital plan after remediating the deficiencies. In March 2013, the Federal Reserve reported the results of its 2013 supervisory stress test and of the CCAR exercise. The Federal Reserve found that Ally Financial's tier 1 common capital ratio fell below the required 5 percent under the severely adverse scenario. Ally Financial was the only one of the 18 bank holding companies tested that fell below this required level. 
The Federal Reserve objected to Ally Financial's capital plan during the 2013 CCAR. According to the Federal Reserve, Ally Financial's capital ratios did not meet the required minimums under the proposed capital plan. Specifically, the Federal Reserve reported that under stress conditions, Ally Financial's plan resulted in a tier 1 common ratio of 1.52 percent, which is below the required level of 5 percent under the capital plan rule. According to the Federal Reserve CCAR results paper, these results assumed that Ally Financial remained subject to contingent liabilities associated with ResCap. The Federal Reserve required Ally Financial to resubmit its capital plan, which Ally Financial did in September 2013. Ally Financial owns Ally Bank, an Internet- and telephone-based bank. Ally Bank is a state-chartered nonmember bank supervised by FDIC and the Utah Department of Financial Institutions. Ally Bank had more than $55.9 billion in total deposits as of March 31, 2014. To help stabilize the automotive industry and avoid further economic disruptions, Treasury disbursed $79.7 billion through AIFP from December 2008 through June 2009. The assistance was used to support two automakers, Chrysler and GM, and their automotive finance companies, Chrysler Financial and GMAC. In July 2009, Treasury outlined guiding principles for the investments, including exiting its investments as soon as practicable in a timely and orderly manner that minimizes financial market and economic impact; protecting taxpayer investment and maximizing overall investment returns within competing constraints; improving the strength and viability of GM and Chrysler so that they could contribute to economic growth and jobs without government involvement; and managing its ownership stake in a hands-off, commercial manner, including voting its shares only on core governance issues, such as the selection of a company's board of directors and major corporation events or transactions. 
In late December 2008, as a part of AIFP, Treasury agreed to purchase $5 billion in senior preferred equity from GMAC and received an additional $250 million in preferred shares through warrants that Treasury exercised immediately. Treasury subsequently provided GMAC with additional assistance through TARP. In May 2009, Treasury purchased $7.5 billion of mandatory convertible preferred shares from GMAC. Also, in May 2009, Treasury exercised its option to exchange an $884 million loan to GM for a 35.4 percent common ownership share in GMAC. In December 2009, Treasury made additional investments in Ally Financial—$2.5 billion of trust preferred securities and approximately $1.3 billion of mandatory convertible preferred shares. Also in December 2009, Treasury converted $3 billion of existing mandatory convertible preferred shares into common stock, increasing its common equity ownership from 35 to 56.3 percent. In December 2010, Treasury converted $5.5 billion of existing mandatory convertible preferred shares into common stock, increasing its common equity ownership to approximately 74 percent of Ally Financial. Ally Financial announced a plan in 2012 to repurchase Treasury's mandatory convertible preferred shares, worth $5.9 billion, to reduce Treasury's investment in the company. However, this plan stalled after the Federal Reserve objected to Ally Financial's initial 2013 capital plan submission, partly because of uncertainty about the company's obligations associated with the ResCap bankruptcy. Two key regulatory and legal developments allowed Ally Financial and Treasury to move ahead with plans to reduce Treasury's investments in the company in late 2013. First, in November 2013, the Federal Reserve did not object to Ally Financial's resubmitted capital plan. Second, in December 2013, the bankruptcy proceedings of Ally Financial's mortgage subsidiary, ResCap, were substantially resolved. 
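A rough tally of the tranches listed above can be checked with simple arithmetic. This is a minimal sketch: it assumes the $250 million received through warrants is accounted for within the tranches shown rather than added separately, and it uses the report's approximate $1.3 billion figure for the December 2009 preferred shares.

```python
# Treasury TARP investments in GMAC/Ally Financial, in $ billions,
# as listed in the report (the Dec 2009 preferred figure is approximate).
investments = {
    "Dec 2008 senior preferred equity": 5.0,
    "May 2009 mandatory convertible preferred": 7.5,
    "May 2009 exchange of GM loan for common equity": 0.884,
    "Dec 2009 trust preferred securities": 2.5,
    "Dec 2009 mandatory convertible preferred": 1.3,
}
total = sum(investments.values())
print(round(total, 1))  # 17.2 -- consistent with the $17.2 billion total
```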
Following the resolution of these issues, Treasury significantly reduced its ownership stake in Ally Financial—primarily through sales of common stock—from 74 to 16 percent as of June 30, 2014. Also as of June 30, 2014, Treasury had received $17.8 billion (including interest and dividends), which exceeds the total Treasury assistance to the company of $17.2 billion. Two key regulatory and legal developments in the second half of 2013 helped Treasury accelerate the wind-down of its investments in Ally Financial. Federal Reserve did not object to Ally Financial’s resubmitted capital plan: In November 2013 Ally Financial received a “nonobjection” from the Federal Reserve to its resubmitted 2013 CCAR capital plan, which enabled Ally Financial to move forward on its repurchase of $5.9 billion of the remaining Treasury-owned mandatory convertible preferred shares. As we previously reported, Treasury and Ally Financial agreed in August 2013 that Ally would repurchase the mandatory convertible preferred shares, conditioned on receiving a nonobjection on the resubmitted capital plan and the closing of a private placement securities transaction. Ally Financial resubmitted its plan in September 2013 and the Federal Reserve approved it on November 15, 2013. The Federal Reserve nonobjection enabled Ally Financial to complete the private placement of common shares valued at $1.3 billion announced in August 2013. The private placement, intended in part to help finance the repurchase of the $5.9 billion remaining Treasury-owned mandatory convertible preferred shares, was completed in November 2013, as was the repurchase of the Treasury shares. More recently, Ally Financial received a nonobjection from the Federal Reserve in March 2014 on its annual capital plan. Completion of the ResCap bankruptcy: In December 2013, the bankruptcy of Ally Financial’s ResCap subsidiary was substantially resolved. 
The Bankruptcy Court entered an order confirming a bankruptcy plan on December 11, 2013, which became effective on December 17, 2013. The final bankruptcy agreement included a settlement, which the bankruptcy court judge had approved in June 2013, releasing Ally Financial from any and all legal claims by ResCap and, subject to certain exceptions, all other third parties, in exchange for $2.1 billion in cash from Ally Financial and its insurers. According to Ally Financial, its mortgage operations were a significant portion of its operations and were conducted primarily through ResCap. With the completion of the ResCap settlement, Ally Financial largely exited the mortgage origination and servicing business. Ally Financial settled allegations of violations of the Equal Credit Opportunity Act by paying $98 million relating to the execution of consent orders issued by the Department of Justice and the Consumer Financial Protection Bureau. After the legal and regulatory developments in late 2013, the pace of Treasury's reduction in its ownership share of Ally Financial accelerated. From December 2013 through June 2014, Treasury reduced its ownership share of Ally Financial by almost 80 percent (see fig. 1). In November 2013, Ally Financial made cash payments totaling $5.9 billion to repurchase all remaining mandatory convertible preferred shares outstanding and terminate an existing share adjustment provision. Additionally, Ally Financial issued $1.3 billion of common equity to third-party investors, reducing Treasury's ownership share from 74 to 63 percent. In January 2014, Treasury completed a private placement of Ally Financial common stock valued at approximately $3 billion, further reducing Treasury's ownership share of Ally Financial to 37 percent. 
According to Treasury, the decision to undertake a private placement at that time was based on market conditions, as well as information Treasury received about increasing investor interest from the underwriter of two previous private placements of Ally Financial shares—the $1.3 billion private placement Ally Financial completed in November 2013 and an approximate $900 million private offering by GM of its remaining Ally Financial stock in December 2013. These transactions contributed to building an investor base for the stock, according to Treasury and Ally Financial. Treasury said the positive results of the March 2014 Federal Reserve stress test and CCAR contributed to the decision to further reduce its ownership share. The day after the release of the CCAR results in March 2014, Treasury announced that it would sell Ally Financial common stock in an initial public offering (IPO) and in April 2014, completed the IPO of 95 million Treasury shares at $25 per share. The $2.4 billion sale reduced Treasury’s ownership share to approximately 17 percent. Following the IPO, Ally Financial became a publicly held company. In May 2014, Treasury received $181 million from the sale of additional shares after underwriters exercised the option to purchase an additional 7 million shares from Treasury at the IPO price. This additional sale reduced Treasury’s ownership share to approximately 16 percent. As of June 30, 2014, Treasury had received $17.8 billion in sales proceeds and interest and dividend payments on its total assistance to Ally Financial of $17.2 billion. Based on the stock prices, as of June 30, 2014, Treasury’s remaining investment in Ally Financial, which consists of common stock, was valued at almost $1.8 billion. Treasury stated that it would like to divest its ownership stake in Ally Financial in a manner that balances the speed of recovery with maximizing returns for taxpayers. 
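The proceeds of the April 2014 IPO described above can be verified directly from the figures the report gives: 95 million shares at $25 per share. The $181 million from the later overallotment exercise is not recomputed here, because the report does not break that figure down.

```python
ipo_shares = 95_000_000   # Treasury shares sold in the April 2014 IPO
price_per_share = 25.0    # IPO price per share, as stated in the report
proceeds = ipo_shares * price_per_share
print(proceeds / 1e9)  # 2.375 -- reported as a $2.4 billion sale
```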
Treasury officials told us that Treasury does not have a specific date by which it intends to fully divest from the company, but that its decision on timing will be based on market conditions. These market conditions, in part, will reflect Ally Financial's financial performance. Since 2013, Ally Financial has continued its evolution into a publicly held, monoline finance company in the automotive sector with an Internet bank. Ally Financial's financial condition continued to stabilize in late 2013 and early 2014 and the company raised significant levels of common equity through private and public share offerings. According to recent rating agency analyses, Ally Financial is competitive in automotive financing, particularly in the floor-plan business segment, but faces potential competitive challenges, such as its reliance on GM and Chrysler auto financing relationships. Ally Financial's business structure has been simplified and clarified over the past year, according to rating agency analyses and federal regulatory officials. Specifically, the completion of the ResCap bankruptcy marked the company's exit from the mortgage origination and servicing business. Ally Financial became a financial holding company in December 2013, which, according to the company, enabled it to retain its insurance and auction lines of business and maintain its full suite of products for dealers. Ally Financial also completed sales of its European and Latin American automotive finance operations to GM Financial, GM's captive financing company, and its Canadian operations to Royal Bank of Canada. Ally Financial expects to complete GM Financial's acquisition of its remaining international operation—its China joint venture, in which the company is a 40 percent owner—in 2014, subject to government approvals in China. Since our last review in 2013, Ally Financial's financial performance has continued to stabilize as illustrated by multiple capital, profitability, and liquidity measures. 
Taking into account the resolution of ResCap, the sale of international operations, and other factors, the three largest credit rating agencies upgraded Ally Financial’s ratings, although the ratings remain below investment grade. Ally Financial’s capital position has remained the same or improved since 2009—the year it became subject to regulatory and reporting requirements following its conversion to a bank holding company in December 2008. Capital can be measured in several ways, but we focused on tier 1 capital because it is currently the strongest form of capital (see table 1). We examined Ally Financial’s tier 1 capital ratio and tier 1 leverage ratio and compared them to minimums required under the Federal Reserve’s capital adequacy guidelines for bank holding companies. We also examined Ally Financial’s tier 1 common ratio. The Federal Reserve has long held the view that bank holding companies generally should operate with capital positions well above the minimum regulatory capital ratios, with the amount of capital held commensurate with a bank holding company’s risk profile. Tier 1 capital and tier 1 common capital ratios: Higher tier 1 capital and common capital ratios may indicate that a bank holding company is in a better position to absorb financial losses. A tier 1 capital ratio measures tier 1 capital as a percentage of risk-weighted assets. As shown in table 1, Ally Financial’s tier 1 capital ratio increased from 2009 to 2010 but has declined slightly since 2011. Federal Reserve Capital Adequacy Guidelines require bank holding companies to have a tier 1 risk-based capital ratio of at least 4 percent. Ally Financial’s tier 1 capital ratio exceeded the required minimum each year from 2009 through 2013. In 2013, Ally Financial reported that the tier 1 capital ratio declined, in part, because of the repurchase of Treasury’s mandatory convertible preferred shares, which qualified as tier 1 capital. 
A tier 1 common capital ratio measures common capital—that is, the common equity component of tier 1 capital—as a share of risk-weighted assets. Ally Financial's tier 1 common ratio has increased from 4.85 percent at the end of 2009 to 8.84 percent at the end of 2013. Tier 1 leverage ratio: A tier 1 leverage ratio shows the relationship between a banking organization's core capital and total assets. The tier 1 leverage ratio is calculated by dividing the tier 1 capital by the firm's average total consolidated assets. Generally, a larger tier 1 leverage ratio indicates that a company is less risky because it has more equity to absorb losses in the value of its assets. As shown in table 1, Ally Financial's leverage ratio has been reduced by 20 percent since 2009 but remains well above the regulatory minimum guideline of 3 or 4 percent, depending on the bank holding company's composite rating. We also examined several measures of Ally Financial's profitability, including net income (loss), net interest spread, return on assets, and the nonperforming asset ratio. Net income (loss): Ally Financial suffered a net loss in 2009 of $10 billion, but has reported net income for 4 of the last 5 years. As shown in figure 2, the 2009 loss was driven by substantial losses in its mortgage business operating unit. Ally Financial reported net income of $361 million in 2013, down from net income of $1.2 billion in 2012. The company attributed the decline to circumstances including a tax valuation adjustment of $1 billion in 2012; the 2013 payment of $1.4 billion as part of the ResCap settlement agreement; and the $98 million payment in connection with the Department of Justice and Consumer Financial Protection Bureau consent orders. Net interest spread: The net interest spread is the difference between the average rate on total interest-earning assets and the average rate on total interest-bearing liabilities, excluding discontinued operations for the period. 
In general, the larger the spread, the more a company is earning. Ally Financial’s net interest spread increased from a reported 0.31 percent at the end of 2009 to 1.75 percent at the end of 2013, meaning that Ally Financial is earning more interest on its assets than it is paying interest on its liabilities (see table 2). Return on assets (ROA): ROA is calculated by dividing a company’s net income by its total assets. It is an indication of how profitable a company is relative to its total assets and gives an idea of management’s efficiency in using its assets to generate earnings. A higher ROA suggests that a company is using its assets efficiently. Ally Financial reported improved ROA from 2009 to 2013, with a reported negative 5.81 percent ROA for 2009 and a positive 0.23 percent in 2013. Nonperforming asset ratio: This ratio measures asset quality by dividing the value of nonperforming assets by the value of total assets. The lower the ratio, the fewer poorly performing assets a company holds. Ally Financial’s nonperforming asset ratio fell from 4.36 percent in 2009 to 1.19 percent in 2013 (see table 2). Ally Financial’s liquidity position generally has stabilized since 2009. To examine Ally Financial’s liquidity position, we examined the company’s total liquidity ratio, bank deposits, and operating cash flow. Total liquidity ratio: Liquidity ratios measure a bank’s total liquid assets against its total liabilities. Generally, the ratios indicate a bank’s ability to sell assets quickly to cover short-term debts—with a higher ratio providing a larger margin of safety. Overall, Ally Financial’s liquidity ratio remained fairly stable from the third quarter of 2009 through the fourth quarter of 2013 (see fig. 3). Declines in liquidity levels in 2012 and 2013 were associated with repayments of government assistance. 
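The ratio definitions above all reduce to simple division. The sketch below uses hypothetical balance-sheet figures (the report gives only the resulting percentages, not the underlying dollar amounts) to show how each measure is computed.

```python
def pct(numerator, denominator):
    """Express a ratio as a percentage."""
    return 100.0 * numerator / denominator

# Hypothetical figures, in $ billions -- for illustration only.
tier1_capital = 14.0
risk_weighted_assets = 120.0
avg_total_assets = 150.0
net_income = 0.35
total_assets = 150.0
nonperforming_assets = 1.8

tier1_ratio = pct(tier1_capital, risk_weighted_assets)   # tier 1 capital / risk-weighted assets
leverage_ratio = pct(tier1_capital, avg_total_assets)    # tier 1 capital / average total assets
roa = pct(net_income, total_assets)                      # net income / total assets
npa_ratio = pct(nonperforming_assets, total_assets)      # nonperforming assets / total assets
print(round(tier1_ratio, 2), round(leverage_ratio, 2), round(roa, 2), round(npa_ratio, 2))
```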
For example, according to Ally Financial, the decline in liquidity in 2013 was due to the repurchase of the Treasury mandatory convertible preferred shares and the redemption of certain high-coupon, callable debt. For the quarter ending March 31, 2014, Ally Financial reported a total liquidity ratio of 16.01 percent. Bank deposits: Bank deposits are the funds that consumers and businesses place with a bank, and growth in deposits is an important factor in the bank’s liquidity position. From December 2008 to March 2014, deposits at Ally Bank, Ally Financial’s Internet bank, grew almost 190 percent, from $19.3 billion to $55.9 billion, of which approximately $45.2 billion were retail (consumer) deposits. Deposits accounted for 43 percent of Ally Financial’s total funding as of the first quarter of 2014, providing the company with a low-cost source of funding that is less sensitive to interest rate changes and market volatility than other sources of funding. Operating cash flow: From the first quarter of 2010 through the third quarter of 2013, Ally Financial generated positive cash flow from operating activities (see fig. 4). Since the third quarter of 2013, cash flows have varied, with Ally reporting negative cash flow in the fourth quarter of 2013 and positive cash flow at the end of the first quarter of 2014. According to Ally Financial, 2013 declines in operating cash flow (compared with the prior year) were driven by the settlement of derivative transactions, but were partially offset by sales and repayments of mortgage and automotive loans. Ally Financial’s changing financial condition is reflected in its credit rating. Although Ally’s credit rating remains below investment grade, its long- term credit rating with the three largest credit rating agencies has been upgraded multiple times since 2009. Most recently, Ally’s long-term ratings with Moody’s, Standard and Poor’s, and Fitch Ratings were upgraded to Ba3, BB, and BB+, respectively. 
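The deposit growth stated above (from $19.3 billion in December 2008 to $55.9 billion in March 2014) follows directly from the endpoints the report gives; a minimal check:

```python
deposits_dec_2008 = 19.3   # $ billions, Ally Bank deposits, December 2008
deposits_mar_2014 = 55.9   # $ billions, March 2014
growth = 100.0 * (deposits_mar_2014 - deposits_dec_2008) / deposits_dec_2008
print(round(growth))  # 190 -- "almost 190 percent" in the report
```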
According to rating agency analyses, Ally Financial is a strong competitor in automotive financing, although the company faces competitive challenges. Analysts have said that Ally Financial is competitive in automotive financing, particularly in the floor-plan business segment. In addition, as mentioned previously in this report, Ally Bank has continued to increase its level of retail deposits. Analysts have pointed to potential competitive challenges for Ally Financial, such as its reliance on GM and Chrysler automotive financing relationships. As we previously reported, GM and Chrysler have established captive financing units. Ally's exclusive lending relationships with GM and Chrysler have ended as the two automakers have begun to rely on their captive financing units. For example, the agreement between GM and Ally Financial on dealer and consumer lending was revised in early 2014. Among other changes, Ally Financial no longer enjoys exclusivity with regard to GM lending arrangements. The captive financing units of GM and Chrysler have begun to increase their financing activities. However, according to Ally Financial representatives, the company is the only automotive finance company that offers a suite of products to dealers (financing, insurance, and auction services) and as a result, the company expects to continue to be competitive in this segment. Moreover, Ally Financial representatives told us that the company has been focusing more on increasing profitability than on market share—consistent with its goals as a publicly held company, which include maximizing return to shareholders. Company representatives also have said in public statements that Ally Financial has been focusing on reducing its noninterest expenses and lowering its cost of funds. Ally Financial also faces competition from other large bank holding companies in consumer automobile financing. 
We compared the amount of Ally Financial consumer automobile financing with that of four large bank holding companies (Bank of America Corporation, Capital One Financial Corporation, JPMorgan Chase & Company, and Wells Fargo & Company) that reported consumer automobile loans. These data do not include all types of automobile financing, such as automobile leasing and dealer financing, but only retail consumer automobile loans for the time period. The consumer automobile lending of some of these bank holding companies exceeded that of Ally Financial (see fig. 5). The dollar amount of consumer automobile loans that Wells Fargo, JPMorgan Chase, and Capital One made increased from March 2011 through March 2014, while the dollar amount of Ally Financial financing has declined since the fourth quarter of 2012. According to Federal Reserve officials, this decline likely reflects the sale of the international automotive finance operations. We provided a draft of this report to FDIC, the Federal Reserve, and Treasury for their review and comment. In addition, we provided a copy of the draft report to Ally Financial to help ensure the accuracy of our report. Treasury provided written comments that are reprinted in appendix II. Ally Financial provided technical comments, which we have incorporated, as appropriate. FDIC and the Federal Reserve did not provide comments. In its written comments, Treasury generally concurred with our findings. Treasury noted that approximately $17.8 billion has been recovered to date from Ally Financial through repayments, a private placement, and an initial public offering. Treasury also noted that it will unwind its remaining ownership stake in a way that balances the speed of recovery with maximizing returns to taxpayers. We are sending copies of this report to FDIC, the Federal Reserve, and Treasury, and the appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report is based on our continuing analysis and monitoring of the Department of the Treasury's (Treasury) activities in implementing the Emergency Economic Stabilization Act of 2008 (EESA), which provided us with broad oversight authorities for actions taken under the Troubled Asset Relief Program (TARP). This report examines (1) the status of Treasury's investments in Ally Financial Inc. (Ally Financial) as of June 30, 2014, and its efforts to wind down those investments; and (2) the financial condition of Ally Financial through March 31, 2014. To examine the status of Treasury's investments, we reviewed TARP reports, which included monthly reports to Congress and daily TARP updates regarding the Automotive Industry Financing Program (AIFP) program data. Using the AIFP program data, we analyzed Treasury's equity ownership and recovery of funds in Ally Financial for the time period from January 2009 through June 2014. We have previously assessed the reliability of the AIFP program data from Treasury. For example, we tested the Office of Financial Stability's internal controls over financial reporting as they related to our annual audit of the office's financial statements and found the information to be sufficiently reliable based on the results of our audit of the TARP financial statements for fiscal years 2009 through 2013. AIFP was included in these financial audits. In addition, for this review, we reviewed the data for completeness and obvious errors such as outliers. Based on this review, we determined that the data were sufficiently reliable for our purposes. 
EESA, which was signed into law on October 3, 2008, established the Office of Financial Stability within Treasury and provided it with broad, flexible authorities to buy or guarantee troubled mortgage-related assets or any other financial instruments necessary to stabilize the financial markets. We interviewed officials from Treasury, the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and representatives from Ally Financial. To assess the financial condition of Ally Financial, we measured the institution's capital ratios, net income, net interest spread, return on assets, nonperforming asset ratio, liquidity ratio, bank deposits, and operating cash flow, generally from 2009 through the first quarter (March 31) of 2014. We obtained these data from SNL Financial, a provider of financial information. We have determined in past reports that SNL Financial data are sufficiently reliable, and we reviewed past GAO data reliability assessments to ensure that we, in all material respects, used the data in a similar manner and for similar purposes. We also reviewed reports by several credit rating agencies on how they rate Ally Financial's financial strength. Although we have reported on actions needed to improve the oversight of rating agencies, we included these ratings because the ratings are widely used by Ally Financial, Treasury, and market participants. To obtain information on the financial ratios and indicators used in the analyses of Ally Financial's financial condition, we reviewed relevant documentation and interviewed officials from FDIC, the Federal Reserve, Treasury, and representatives from Ally Financial. For the comparison of retail (consumer) automotive lending for five large bank holding companies, including Ally Financial, we used Federal Reserve regulatory filings (Form FR-Y9C). 
For each data source we reviewed the data for completeness and obvious errors and determined that these data were sufficiently reliable for our purposes. We conducted this performance audit from March 2014 to August 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Karen Tremba (Assistant Director), Catherine Gelb (Analyst-in-Charge), Bethany Benitez, William Chatlos, Risto Laboski, Terence Lam, Barbara Roesmann, and Jena Sinkfield made significant contributions to this report.
As part of its Automotive Industry Financing Program, funded through the Troubled Asset Relief Program (TARP), Treasury provided $17.2 billion of assistance to Ally Financial (formerly known as GMAC). Ally Financial is a large financial holding company, the primary business of which is auto financing. TARP's authorizing legislation mandates that GAO report every 60 days on TARP activities. This report examines (1) the status of Treasury's investments in Ally Financial and its efforts to wind down those investments and (2) the financial condition of Ally Financial. To address these issues, GAO reviewed and analyzed available industry, financial, and regulatory data from 2009 through June 2014. GAO also reviewed rating agency analyses, Treasury reports and documentation detailing Treasury's investments in Ally Financial and its divestments from the company, as well as Ally Financial's financial filings and reports. GAO also interviewed officials from the Federal Deposit Insurance Corporation (FDIC), Federal Reserve, and Treasury, and representatives from Ally Financial. GAO provided a draft of this report to FDIC, the Federal Reserve, Treasury, and Ally Financial. Treasury generally concurred with GAO's findings. Ally Financial provided technical comments, which GAO has incorporated, as appropriate. FDIC and the Federal Reserve did not provide comments. GAO makes no recommendations in this report. The Department of the Treasury (Treasury) reduced its ownership stake in Ally Financial Inc. (Ally Financial) from 74 percent in October 2013, to 16 percent as of June 30, 2014. As shown in the figure below, the pace of Treasury's reduction in its ownership share of Ally Financial accelerated in 2013 and corresponds with two key events. 
First, in November 2013, the Board of Governors of the Federal Reserve System (Federal Reserve) did not object to Ally Financial's resubmitted 2013 capital plan, which allowed Ally Financial to repurchase preferred shares from Treasury and complete a private placement of common shares. Second, in December 2013 the bankruptcy proceedings of Ally Financial's mortgage subsidiary, Residential Capital LLC (ResCap), were substantially resolved. The confirmed Chapter 11 plan broadly released Ally Financial from any and all legal claims by ResCap and, subject to certain exceptions, all other third parties, in exchange for $2.1 billion in cash from Ally Financial and its insurers. As of June 30, 2014, Treasury had received $17.8 billion in sales proceeds and interest and dividend payments on its total assistance to Ally Financial of $17.2 billion. Ally Financial's financial condition has continued to stabilize in late 2013 and early 2014 as illustrated by multiple capital, profitability, and liquidity measures. For example, Ally Financial's capital ratios have remained above regulatory minimum levels since 2009, which indicates that it is in a better position to absorb financial losses. In addition, the company raised significant levels of common equity through private and public share offerings. According to recent credit rating agency analyses, Ally Financial is competitive in automotive financing, particularly in the floor-plan business segment, which focuses on dealer financing. However, analysts reported that the company faces potential competitive challenges, such as the loss of certain exclusive relationships with General Motors Company and Chrysler Group LLC.
In response to the creation of TANF, states implemented more work-focused welfare programs, and research shows that these changes—in concert with other policy changes and economic conditions—contributed to raising the incomes of single-parent families so that fewer were eligible for cash assistance. In designing and implementing their new TANF programs, states focused more than ever before on helping welfare recipients and other low-income parents find jobs. Many states implemented work-focused programs that stressed moving parents quickly into jobs and structured the benefits to allow more parents to combine welfare and work. States also imposed financial consequences, or sanctions, on families that did not comply with TANF work or other requirements, strengthening the incentives for TANF participants to comply with work requirements. Other concurrent policy changes contributed to an increase in the share of single parents in the labor force. These included an increase in the Earned Income Tax Credit (EITC) in the 1990s and increases in the minimum wage in 1996 and 1997, both of which contributed to an increase in the returns to work. Additional funds for federal and state work supports such as child care also made it easier for single parents to enter the labor force. Finally, the strong economy of the 1990s facilitated the move from welfare to work for many TANF recipients. A decline in the unemployment rate and strong economic growth contributed to the widespread availability of job openings for workers of all skill levels in many parts of the country. During this period, labor force participation increased among single mothers, the population most affected by TANF—from 58 percent in 1995—the year prior to the creation of TANF—to 71 percent in 2007, with most of this increase occurring immediately following the passage of welfare reform. 
Because the incomes of many single-parent families increased as a result of these policies, in total, 420,000 fewer families had incomes low enough to be eligible for cash assistance in 2005 compared to 1995, according to HHS data. At the same time that some families worked more and had higher incomes, others had income that left them still eligible for TANF cash assistance; however, many of these eligible families were not participating in the program. According to our estimates, the vast majority—87 percent—of the caseload decline can be explained by the decline in eligible families participating in the program, in part because of changes to state welfare programs (see fig. 1). These changes include mandatory work requirements, changes to application procedures, lower benefits, and policies such as lifetime limits on assistance, diversion policies, and sanctions for noncompliance, according to a review of the research. While mandatory work activities assisted some participants in getting jobs, according to a research synthesis conducted for HHS, these mandates may have led other families to choose not to apply rather than be expected to fulfill the requirement to work. Other families may have found it more difficult to apply for or continue to participate in the program, especially those with poor mental or physical health or other characteristics that make employment difficult. A decline in average cash benefits may also have contributed to the decline in participation. Average cash benefits under 2005 TANF rules were 17 percent lower than they were under 1995 AFDC rules, according to our TRIM3 estimates, as cash benefit levels in many states have not been updated or kept pace with inflation. 
Research also suggests that, in response to lifetime limits on the amount of time a family can receive cash assistance, eligible families may hold off on applying for cash assistance and “bank” their time, a practice that could contribute to the decline in families’ use of cash assistance. In addition, fewer families may have applied or completed applications for TANF cash assistance because of state policies and practices for diverting applicants from cash assistance; nearly all states have at least one type of diversion strategy, such as the use of one-time nonrecurring benefits instead of monthly cash assistance. Finally, some studies and researchers noted that full sanctions for families’ noncompliance—those that cut off all benefits for a period of time—are associated with declines in the number of families receiving cash assistance, although more research is needed to validate this association. During the recent economic recession, caseloads increased in some states but decreased in others, as circumstances in individual states as well as states’ responses to the economic conditions varied. Between December 2007 and September 2009, 37 states had increases in the number of families receiving TANF cash assistance while 13 states had decreases. However, the degree of change in families receiving TANF cash assistance varied significantly by state, as some states experienced caseload increases or decreases of over 25 percent while others experienced minimal changes of 0 to 5 percent. Nationwide, the total number of families receiving TANF cash assistance increased by 6 percent during this time period although the subset of two-parent families receiving such assistance increased by 57 percent. 
Initially, few states reported reducing TANF-related spending on family and/or work supports in response to the recession, instead using funding sources such as the TANF Emergency Contingency Fund created by the Recovery Act to respond to rising caseloads and/or to establish or expand subsidized employment programs. However, through their comments on our national survey and during our site visits, state officials discussed how the economic recession has caused changes to local TANF service delivery in some states. A majority of state TANF officials nationwide, as well as TANF officials from all eight localities we visited, reported that they made changes in local offices’ TANF service delivery because of the economic recession. Specifically, of the 31 states reporting such changes through our survey, 22 had reduced the number of TANF staff, 11 had reduced work hours at offices, and 7 had reduced the number of offices. Officials in all three states we visited also reported that local TANF caseworkers are now each managing more TANF cash assistance families. As a result of these increased caseloads, along with tightened resources, local officials in all three of the states we visited expressed concerns that staff are less able to provide services to meet TANF cash assistance families’ needs and move them toward self-sufficiency. Research on how families are faring after welfare reform has shown that, like those who receive TANF cash assistance, families that have left welfare, either for work or for other reasons, tend to remain low income, and most depend in part on other public benefits. As we noted in a 2005 report, most of the parents who left cash welfare found employment and some were better off than they were on welfare, but earnings were typically low and many worked in unstable, low-wage jobs with few benefits and advancement opportunities. 
There is evidence that some former TANF recipients have had better outcomes; for example, a 2009 study found that, in general, former TANF recipients in three cities, especially those who had left TANF prior to 2001, had higher employment rates and average income levels than they had while they were on TANF. However, even among working families, many rely on government supports such as the EITC, Medicaid, the Supplemental Nutrition Assistance Program (SNAP), formerly known as the Food Stamp Program, and other programs to help support their families and lift them out of poverty, as most parents who recently left welfare are not earning enough to be self-supporting. In addition, a considerable body of work has documented families who are often described as “disconnected” from the workforce. It is not yet known whether or to what extent the recession has led to an increase in the number of these families. A recent GAO analysis of the characteristics of low-income families several years post-welfare reform found that while families who were receiving TANF cash assistance in 2005 had low incomes, a third worked full-time and most received other public supports, according to the most recent data available. The median household income of families receiving TANF cash assistance was $9,606 per year, not including means-tested benefits. One third of families who received TANF cash assistance at some point during the year (33 percent) were engaged in full-time employment, while 44 percent were headed by an adult without earnings. About a fifth (18 percent) of these families were headed by an adult who had a work- limiting disability. 
The vast majority of families receiving TANF cash assistance—91 percent—also received at least one other public benefit, with most (88 percent) receiving SNAP benefits and smaller proportions receiving subsidized housing (22 percent), child care subsidized by the Child Care and Development Fund (CCDF) (11 percent), or Supplemental Security Income (SSI), a cash assistance program for low-income people with disabilities (22 percent). Only 16 percent of families receiving cash assistance included married couples, and even fewer—less than 10 percent—had income from an unmarried partner. Many TANF-eligible families do not participate in the program, possibly because they left the program or because they did not apply. Our analysis found that, on average, these families had higher incomes than TANF recipients, but median incomes remained low, a significant proportion did not work full time, and many received public supports other than TANF. Compared to TANF cash assistance recipients, eligible non-recipients had higher median incomes ($15,000 per year) and higher rates of full-time employment (44 percent). However, a significant proportion of TANF-eligible non-recipient families—41 percent—were headed by an adult without any earnings, and 11 percent were headed by an adult with a work-limiting disability. A somewhat lower percentage of those eligible but not receiving TANF cash assistance received other public benefits (66 percent received any benefit), but a majority lived in households that received SNAP (59 percent). Receipt of other benefits was also somewhat lower than among TANF recipients, with 13 percent receiving subsidized housing, 8 percent receiving CCDF-subsidized child care, and 18 percent receiving SSI. More eligible non-participating families were headed by married couples than participating families, but they were no more likely to have income from an unmarried partner. 
A small subgroup of families eligible for but not receiving TANF cash assistance (732,000 families in 2005) neither worked nor received SSI benefits, and this group had lower incomes than TANF recipients and other eligible non-recipients. These families also had lower receipt of other public benefits compared to TANF recipients. Among families with no earned income that received neither TANF nor SSI, the median income from all sources was $7,020, an amount equal to about 45 percent of the federal poverty threshold for a family consisting of one adult and two children. Twelve percent of this group of families were headed by a parent who reported having a work-limiting disability. The extent to which these families received other public benefits was similar to that of other families eligible for but not participating in TANF, with 66 percent receiving any benefit. Most (63 percent) received SNAP benefits, while 18 percent received subsidized housing and 4 percent received CCDF-subsidized child care. These more disadvantaged non-participants accounted for 11 percent of all families who were eligible for TANF cash assistance in 2005. Data on caseload trends, state policies, and how families are faring can provide important insight into how TANF programs are working. However, work participation rates—a key accountability feature of TANF, as currently measured and reported—do not appear to be achieving the intended purpose of encouraging states to engage specified proportions of TANF adults in work activities. In addition, as cash assistance caseloads fell, many states shifted their spending away from cash assistance toward work supports such as child care, highlighting information gaps at the federal level in how many families received TANF services and how states used federal and state MOE funds to meet TANF goals. To promote TANF’s focus on work, HHS measures state performance by the proportion of TANF participants engaged in allowable work activities. 
States are expected to ensure that at least 50 percent of all families receiving TANF cash assistance participate in one or more of 12 categories of work activities for an average of 30 hours per week. PRWORA established penalties for states that did not meet their required work participation rates and gave HHS the authority to make determinations regarding these penalties. However, states can take advantage of program options to make it easier to meet their required rates. For example, states can annually apply to HHS for a caseload reduction credit, which generally decreases a state’s required work participation rate by the same percentage that the state’s caseload has declined since a specified base year, established as 1995 in PRWORA. Because of the significant drop in caseload size, many states were able to reduce their required work participation rate. In fact, 18 states reported caseload reductions of at least 50 percent in fiscal year 2006, effectively reducing their required work participation rate to zero. In addition, states can modify the calculation of their work participation rates by funding certain families with state maintenance-of-effort (MOE) dollars rather than federal TANF block grant dollars. By using state MOE dollars rather than federal dollars, states are able to remove these families from the work participation rate calculation. Between 2001 and 2006, all but two states met the participation rate requirement, according to HHS data. However, nationally, only between 31 and 34 percent of families receiving cash assistance met their work requirements during this time. In 2006, DRA reauthorized the TANF block grant through fiscal year 2010 and made several modifications that were generally expected to strengthen TANF work requirements, help more families attain self-sufficiency, and improve data reliability. 
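As a rough illustration of the caseload reduction credit arithmetic described above, the sketch below lowers a state's required work participation rate by the same percentage that its caseload declined from the base year. The function name and caseload figures are hypothetical, and HHS's actual credit calculation involves additional adjustments (for example, for eligibility changes) that are not modeled here.

```python
# Illustrative sketch only; not HHS's actual methodology, which includes
# further adjustments. All figures below are hypothetical.

def effective_required_rate(statutory_rate, base_caseload, current_caseload):
    """Lower the required work participation rate (in percentage points) by the
    percentage decline in the caseload since the base year; never below zero."""
    decline_pct = max(0.0, (base_caseload - current_caseload) * 100 / base_caseload)
    return max(0.0, statutory_rate - decline_pct)

# A caseload that fell 50 percent since the base year reduces a 50 percent
# required rate to zero, as 18 states effectively achieved in fiscal year 2006:
print(effective_required_rate(50.0, 100_000, 50_000))   # 0.0

# A 30 percent caseload decline lowers the required rate to 20 percent:
print(effective_required_rate(50.0, 100_000, 70_000))   # 20.0
```

This also shows why moving the base year from 1995 to 2005, as DRA did, shrank the credit: most of the caseload decline occurred before 2005, so the post-DRA decline percentage, and hence the credit, was much smaller.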
For example, DRA modified the caseload reduction credit by changing the base year from 1995 to 2005, and it mandated that families receiving cash assistance funded with state maintenance of effort dollars be included in the calculation of the work participation rates. It also directed HHS to issue regulations defining the 12 work activities and included new requirements to better ensure the reliability of work participation rate data. We found that the proportion of families receiving TANF cash assistance that met work participation requirements has changed little since DRA was enacted and is still below the 50 percent generally specified as the required rate. In fiscal years 2007 and 2008—the two years following DRA for which national data are available—between 29 and 30 percent of families receiving TANF cash assistance met their work requirements. In numbers of families, 243,000 of 816,000 families met their work requirements in fiscal year 2008. The small decrease in the proportion of families that met their requirements after DRA may be related, in part, to the federal work activity definitions and tightened work hour reporting and verification procedures states had to comply with after the act, as well as states’ ability to make the required changes. The types of work activities in which families receiving TANF cash assistance most frequently participated were similar before and after DRA. For example, among families that met their work requirements, the majority participated in unsubsidized employment in the years both before and after DRA. In all of the years analyzed, the next most frequent work activities were job search and job readiness assistance, vocational educational training, and work experience. Although the national rate did not change significantly, fewer states met the required work participation rates after DRA, according to HHS data. As before DRA, states used a variety of options and strategies to meet their required work participation rate. 
For example: States continued to request caseload reduction credits to help lower their required work participation rates; however, the credits were significantly smaller after DRA because caseloads declined less after 2005. Some states lowered their required rates by spending state MOE dollars in excess of what is required under federal law on TANF-related programs, a practice we found enabled 22 states to meet their rates in 2007 and 14 states in 2008. Total state MOE expenditures increased by almost $2 billion between fiscal years 2006 and 2008, which appears to be related to state spending on programs and services such as preventing and reducing out-of-wedlock pregnancies. Some states used policies to ensure that families complying with their individual work requirements were included in the work participation rate calculation by, for example, providing monthly cash assistance to working families previously on TANF or about to lose TANF eligibility because their working incomes placed them just above eligibility thresholds; 18 states had implemented such programs since DRA. In contrast, after DRA required that families funded with state maintenance-of-effort dollars be included in the calculation of the work participation rates, some states removed certain nonworking families from the calculation of their rates by funding cash assistance for these families with state dollars unconnected to the TANF program, a practice we found in 29 states. We learned that states often use these state-funded programs to provide cash assistance to families that typically have the most difficulty meeting the TANF work requirements, such as families with a disabled member or recent immigrants and refugees. In short, because of the various factors that affect the calculation of states’ work participation rates, the rate’s usefulness as an indicator of a state’s effort to help participants achieve self-sufficiency is limited. 
Moreover, the rate does not allow for clear comparisons across state TANF programs or comparisons of individual state programs over time. This is the same conclusion we reached in our 2005 report that triggered some of the DRA changes to improve this measure of states’ performance. Further, both our 2005 review, conducted before the DRA changes, and the review we completed in May of this year indicate that the TANF work rate requirements as enacted, in combination with the flexibility provided, may not serve as an incentive for states to engage more families or to work with families with complex needs. Many states have cited challenges in meeting work performance standards under DRA, such as new requirements to verify participants’ actual activity hours and certain limitations on the types and timing of activities that count toward meeting the requirements. The TANF work rate requirements—as established in the original legislation and revised in the Deficit Reduction Act—may not yet have achieved the appropriate balance between flexibility for states and accountability for federal TANF goals. The substantial decline in traditional cash assistance caseloads, combined with state spending flexibilities under the TANF block grant, allowed states to broaden their use of TANF funds. As a result, TANF and MOE dollars played an increasing role in state budgets outside of traditional cash assistance payments. In our 2006 report that reviewed state budgets in nine states, we found that in the decade since Congress created TANF, the states used their federal and state TANF-related funds throughout their budgets for low-income individuals, supporting a wide range of state priorities, such as refundable state earned income credits for the working poor, prekindergarten, child welfare services, and mental health and substance abuse services, among others. 
While some of this spending, such as that for child care assistance, relates directly to helping cash assistance recipients leave and stay off the welfare rolls, other spending is directed to a broader population that did not necessarily ever receive welfare payments. This is in keeping with the broad purposes of TANF specified in the law:
1. providing assistance so that children could be cared for in their own homes or in the homes of relatives;
2. ending families’ dependence on government benefits by promoting job preparation, work, and marriage;
3. preventing and reducing the incidence of out-of-wedlock pregnancies; and
4. encouraging the formation and maintenance of two-parent families.
More recent data indicated that this trend has continued, even under recessionary conditions. In fiscal year 2009, federal TANF and state MOE expenditures for purposes other than cash assistance totaled 70 percent of all expenditures, compared with 27 percent in fiscal year 1997, when states first implemented TANF, as shown in figure 2. In addition, of the 21 states we surveyed for our February 2010 report, few reported that they had reduced federal TANF and MOE spending for other purposes, such as child care and subsidized employment programs, to offset increased expenditures for growth in their cash assistance caseloads. States that increased spending on cash assistance while maintaining or increasing spending for other purposes did so by spending reserve funds, accessing the TANF Contingency Fund, accessing the TANF Emergency Contingency Fund created by the Recovery Act, or a combination of the three. This shift in spending left gaps in the information gathered at the federal level to ensure state accountability. 
Because existing oversight mechanisms focus on cash assistance, which no longer accounts for the majority of TANF and MOE spending, we may be missing important information on the total numbers served and how states use TANF funds to help families to achieve program goals in ways beyond their welfare-to-work programs. For example, states have used significant portions of their TANF funds to augment their child care subsidy programs, which often serve non-TANF families, yet we do not know how many children are served or what role these subsidies play in helping low-income families avoid welfare dependency, a key TANF goal. Further, many states use TANF funds to fund a significant portion of their child welfare programs. In effect, there is little information on the numbers of people served by TANF-funded programs other than cash assistance, and there is no real measure of workload or of how services supported by TANF and MOE funds meet the goals of welfare reform. Another implication of changing caseloads relates to their changing composition, with about half of the families receiving cash assistance composed of cases with no adult receiving assistance in fiscal year 2008, compared with less than one-quarter in fiscal year 1998 (see fig. 3). There are four main categories of “child-only” cases: (1) the parent is disabled and receiving SSI; (2) the parent is a noncitizen and therefore ineligible; (3) the child is living with a nonparent relative; and (4) the parent has been sanctioned and removed from cash assistance for failing to comply with program requirements, and the family’s benefit has been correspondingly reduced. These families, with parents or guardians not receiving TANF cash assistance and generally not subject to work requirements, have not been the focus of efforts to help families achieve self-sufficiency. 
Nearly 15 years after the creation of TANF, the expected upcoming reauthorization of the program has brought renewed interest to efforts to assess how well the program is meeting the needs of low-income families with children—most headed by women—and putting them on a path to self-sufficiency. While the dramatic decline in the TANF caseload following welfare reform and the increase in employment among single mothers have been cited as evidence of the program’s success, questions have been raised about its effect on families. Many who left the rolls transitioned to low-wage, unstable jobs, and research has shown that a small subset of families who neither receive TANF nor earn income may have been left behind. Following the recent economic recession, poverty among children has climbed to its highest level in years. A central feature of the TANF block grant is the flexibility it provides to states to design and implement welfare programs tailored to address their own circumstances, but this flexibility must be balanced with mechanisms to ensure state programs are held accountable for meeting program goals. Over time we have learned that states’ success in engaging TANF cash assistance recipients in the type, hours, and levels of work activities specified in the law has, in many cases, been limited, though they have met the required targets using the flexibility allowed. Although the DRA changes to TANF work requirements were expected to strengthen the work participation rate as a performance measure and move more families toward self-sufficiency, the proportion of TANF recipients engaged in work activities remains unchanged. States’ use of the modifications allowed in federal law and regulations, as well as states’ policy choices, have diminished the rate’s usefulness as the national performance measure for TANF and shown it to be limited as an incentive for states to engage more families or work with families with complex needs. 
Furthermore, while states have devoted significant amounts of the block grant funds as well as state funds to other activities, little is known about the use of these funds. Lack of information on how states use these funds to aid families and to meet TANF goals hinders decision makers in considering the success of TANF and what trade-offs might be involved in any changes to TANF when it is reauthorized. We provided a draft of the reports we drew on for this testimony to HHS for its review, and copies of the agency’s written responses can be found in the appendices of the relevant reports. In its comments, HHS generally said that the reports were informative and did not disagree with our findings. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For questions about this statement, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Hedieh Rahmanou Fusfield, Rachel Frisk, Alexander G. Galuten, Gale C. Harris, Kathryn A. Larin, and Deborah A. Signer.
Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates. GAO-10-525. Washington, D.C.: May 28, 2010.
Temporary Assistance for Needy Families: Implications of Changes in Participation Rates. GAO-10-495T. Washington, D.C.: March 11, 2010.
Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession’s Impact on Caseloads Varies by State. GAO-10-164. Washington, D.C.: February 23, 2010.
Poverty in America: Consequences for Individuals and the Economy. GAO-07-343T. Washington, D.C.: January 24, 2007. 
Welfare Reform: Better Information Needed to Understand Trends in States’ Uses of the TANF Block Grant. GAO-06-414. Washington, D.C.: March 3, 2006.
Welfare Reform: More Information Needed to Assess Promising Strategies to Increase Parents’ Incomes. GAO-06-108. Washington, D.C.: December 2, 2005.
Welfare Reform: HHS Should Exercise Oversight to Help Ensure TANF Work Participation Is Measured Consistently across States. GAO-05-821. Washington, D.C.: August 19, 2005.
TANF and SSI: Opportunities Exist to Help People with Impairments Become More Self-Sufficient. GAO-04-878. Washington, D.C.: September 15, 2004.
Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions. GAO-03-977. Washington, D.C.: July 15, 2003.
Welfare Reform: Former TANF Recipients with Impairments Less Likely to Be Employed and More Likely to Receive Federal Supports. GAO-03-210. Washington, D.C.: December 6, 2002.
Welfare Reform: With TANF Flexibility, States Vary in How They Implement Work Requirements and Time Limits. GAO-02-770. Washington, D.C.: July 5, 2002.
Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002.
Welfare Reform: States Are Using TANF Flexibility to Adapt Work Requirements and Time Limits to Meet State and Local Needs. GAO-02-501T. Washington, D.C.: March 7, 2002.
Welfare Reform: Progress in Meeting Work-Focused TANF Goals. GAO-01-522T. Washington, D.C.: March 15, 2001.
Welfare Reform: Moving Hard-to-Employ Recipients into the Workforce. GAO-01-368. Washington, D.C.: March 15, 2001.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Temporary Assistance for Needy Families (TANF) program, created in 1996, is one of the key federal funding streams provided to states to assist women and children in poverty. A critical aspect of TANF has been its focus on employment and self-sufficiency, and the primary means to measure state efforts in this area has been TANF's work participation rate requirements. Legislative changes in 2005 were generally expected to strengthen these work requirements. Given changes in the number of families participating in TANF over time and questions about whether the program is achieving its goals, this testimony draws on previous GAO work to focus on (1) key changes to state welfare programs made in response to TANF and other legislation and their effect on caseload trends; (2) how low-income single-parent families are faring; and (3) how recent developments in state programs and the economy may affect federal monitoring of TANF. To address these issues, in previous work conducted from November 2008 to May 2010, GAO analyzed state data reported to the Department of Health and Human Services; used microsimulation analyses; surveyed state TANF administrators in 50 states and the District of Columbia; interviewed officials in 21 states selected to represent a range of economic conditions and TANF policy decisions; conducted site visits to Florida, Ohio, and Oregon; and reviewed relevant federal laws, regulations, and research. Changes states made to their welfare programs as they implemented TANF contributed to a significant decline in program participation, but caseloads are starting to increase in many states. The strong economy of the 1990s, TANF's focus on work, and other factors contributed to increased family incomes and a decline in the number of families poor enough to be eligible for cash assistance. 
However, research shows that state policies--including TANF work requirements, time limits, and sanction and diversion policies--also contributed to the caseload decline, as fewer eligible families participated in the program. In recent years, states have varied in their response to changes in economic conditions, with caseloads rising in 37 states and falling in 13 states between December 2007 and September 2009, the latest data available when we did our work. Like TANF recipients, families who left TANF, as well as those who qualified for the program but who did not participate, had low incomes and continued to rely on other government supports. In the years following welfare reform, many of the parents who left cash assistance found employment, and some were better off than they were on welfare, but earnings were typically low and many worked in unstable, low-wage jobs with few benefits. Among eligible families who did not participate, a small subset did not work and had very low incomes. Efforts to measure states' engagement of TANF recipients in work activities and to monitor states' use of all TANF funds have been of limited use in ensuring accountability for meeting federal TANF goals, according to our analysis. Work participation rates--a key performance measure for TANF--as currently measured and reported, do not appear to be achieving the intended purpose of encouraging states to engage specified proportions of TANF recipients in work activities. In addition, states' decisions to shift their spending from cash assistance to other programs and work supports such as childcare have highlighted gaps in the information available at the federal level on how many families received TANF services and how states used funds to meet TANF goals. 
A central feature of the TANF block grant is the flexibility it provides to states to design and implement welfare programs tailored to address their own circumstances, but this flexibility must be balanced with mechanisms to ensure state programs are held accountable for meeting program goals. The limited usefulness of current measures of work participation and the lack of information on how states use funds to aid families and to meet TANF goals hinder decision makers in considering the success of TANF and what trade-offs might be involved in any changes to TANF when it is reauthorized.
The reserve components of the Army and Air Force include both the National Guard and Reserves. These components account for about 85 percent of the total reserve personnel and funding. The Navy, Marine Corps, and Coast Guard have only Reserves. Because the Coast Guard Reserve is such a small force—about 8,000 personnel in 1996—and is under the Department of Transportation, we are not including it in our discussion. Table 1 shows that all the reserve components have been reduced in size since fiscal year 1990. Except for the Marine Corps, the components are projected to be reduced even further by fiscal year 2001. Between fiscal years 1990 and 2001, the reserve components are expected to decline by slightly more than 20 percent. The Guard and Reserve comprised about 35 percent of DOD’s total military force in 1990, and they are projected to comprise about 38 percent of the force by the end of fiscal years 1996 and 2001. However, the active and reserve composition of each of the services differs considerably. For example, the Guard and Reserve are projected to comprise slightly over 50 percent of the total Army for fiscal years 1996 and 2001, but the Reserves are projected to comprise less than 20 percent of the Naval and Marine Corps total forces for the same years. According to DOD’s fiscal year 1996 budget request, the reserve components were projected to receive about 7 percent of total DOD funding for fiscal years 1996 and 2001. This percentage is slightly higher than the percentage in 1990. Table 2 shows the distribution of funds by component for fiscal years 1990, 1996, and 2001. The reserve components are expected to provide critical capabilities that are projected to be needed for two major regional conflicts, the military strategy articulated in DOD’s 1993 bottom-up review. 
Examples of these capabilities are as follows: The Army reserve components provide all or significant portions of many of the Army’s support functions, including 100 percent of the forces that provide fresh water supply, over 95 percent of the civil affairs units, about 85 percent of the medical brigades, about 75 percent of the chemical defense battalions, and about 70 percent of the heavy combat engineer battalions. The Air Force reserve components provide about 80 percent of aerial port units, over 60 percent of tactical airlift and air rescue and recovery units, and about 50 percent of aerial refueling units. The Naval Reserve contributes 100 percent of the heavy logistics support units, over 90 percent of the cargo handling battalions, and about 60 percent of the mobile construction battalions. The Gulf War was the first major test of the Total Force policy. Over 200,000 reservists served on active duty either voluntarily or as a result of involuntary call-up. Very few of the combat units in the reserve components were called up for the war; however, the support units were deployed extensively. According to a study by the Institute for Defense Analyses for DOD’s Commission on Roles and Missions, many reserve component combat and support units that were deployed for the war demonstrated their ability to perform to standard with little postmobilization training. However, the experience among the services was mixed, according to the study. For example, the Marine Corps called up and deployed more of its Reserve combat units than the other military services, and the units carried out their missions successfully. The Air Force deployed few of its reserve component combat forces, but the forces that were deployed demonstrated that they could perform in a war, if needed. 
The Army did not deploy National Guard combat brigades that were associated with active divisions because those divisions were deployed on short notice and the Army believed the brigades needed extensive postmobilization training. In a 1991 report, we stated that the three Army National Guard brigades activated for the Gulf War were inadequately prepared to deploy quickly. Army officials have testified that, although combat brigades were intended to participate in contingency conflicts, the envisioned conflicts were not of the immediate nature of the Gulf War. We found that when the three brigades were activated, many soldiers were not completely trained to do their jobs; many noncommissioned officers were not adequately trained in leadership skills; and Guard members had difficulty adjusting to the active Army’s administrative systems for supply and personnel management, which were different from those the Guard used in peacetime. The activation also revealed that the postmobilization training plans prepared by the three brigades during peacetime had underestimated the training that would be necessary for them to be fully combat ready. About 140,000 of the 200,000 reservists called up for the Gulf War were from the Army reserve components, and most of those individuals were in support units. We reported in 1992 and testified in 1993 that the Army had difficulty providing adequate support forces. In our testimony, we stated that the Army used a large portion of some types of support units, such as heavy and medium truck units and water supply companies, and totally exhausted its supply of other units, even though it had deployed only about one-quarter of its combat divisions. Reserve component personnel have been involved in virtually every contingency operation since the Gulf War. 
For example, over 1,300 Army Reserve and National Guard personnel were activated for Uphold Democracy in Haiti to replace individuals deployed from home stations, provide transportation and logistics, and bolster special operations capabilities such as civil affairs. The Air Force relied on reserve component volunteers to provide airlift, aerial refueling, and operational relief of fighter squadrons for Provide Promise and Deny Flight in Bosnia and Provide Comfort in Iraq. Marine Corps reservists provided security for refugee camps at Guantanamo Bay, and Naval reservists participated in Caribbean operations to intercept refugee vessels. Thousands of reservists have participated in recent peace operations. For example, the President, using his Selected Reserve Callup authority, authorized the activation of up to 4,300 reservists to support operations in Bosnia. As of February 22, 1996, 3,475 reservists had been mobilized, and according to DOD Reserve Affairs officials, the first reserve rotation is in place. Additionally, about 960 volunteers have been deployed. Our recent work on the use of volunteers has shown that they have had the necessary skills and qualifications to perform their jobs and have performed well. Last week we reported that the Army National Guard’s combat forces far exceed projected requirements for two major regional conflicts. Army National Guard combat forces consist of 8 divisions, 15 enhanced brigades, and 3 separate combat units. Today, about 161,000 Guard personnel are in these combat units, including about 67,000 in the 15 enhanced brigades. We stated that the Guard’s eight combat divisions and three separate units are not required to accomplish the two-conflict strategy, according to Army war planners and war planning documents that we reviewed. The Joint Chiefs of Staff have not assigned these divisions and units for use in any major regional conflict currently envisioned in DOD planning scenarios. 
Moreover, although the Joint Chiefs of Staff have made all 15 enhanced brigades available for war planning purposes, the planners have identified requirements for fewer than 10 brigades to achieve mission success in a war. According to DOD documents and Army officials, the excess forces are a strategic reserve that could be assigned missions, such as occupational forces once an enemy has been deterred and rotational forces. However, we could find no analytical basis for this level of strategic reserve. State and federal laws generally authorize the Guard to provide military support to state authorities for certain missions, such as disaster relief. Support skills, such as engineering and military police, are most often needed for state missions. The Guard primarily supplements other state resources for these missions. According to a recent study by RAND’s National Defense Research Institute, the Guard has used only a small percentage of its total personnel over the last decade to meet state requirements. At the time of our review, the Army was studying alternatives to redesign the Guard’s combat structure to meet critical shortages that the Army had identified in its support capabilities. The Army’s most recent analysis projects a shortage of 60,000 support troops, primarily in transportation and quartermaster units. Furthermore, a recent Joint Chiefs of Staff exercise concluded that maintaining sufficient support forces is critical to executing the two-conflict strategy. DOD’s Commission on Roles and Missions concluded in its report that reserve component forces with lower priority tasks, such as the Guard’s eight combat divisions, should be eliminated or reorganized to fill shortfalls in higher priority areas. The Commission also reported that, even after filling the shortfalls, the total Army would still have more combat forces than required and recommended that these forces be eliminated from the active or reserve components. 
The end of the Cold War and budgetary pressures have provided both the opportunity and the incentive to reassess defense needs. Because the Guard’s combat forces exceed projected war requirements and the Army’s analysis indicates a shortage of support forces, we believe it is appropriate for the Army to study the conversion of some Guard combat forces to support roles. Therefore, in our recent report, we recommended that the Secretary of Defense, in conjunction with the Secretary of the Army and the Director of the Army National Guard, validate the size and structure of all the Guard’s combat forces and that the Secretary of the Army prepare and execute a plan to bring the size and structure in line with validated requirements. We also recommended that, if the Army study suggests that some Guard combat forces should be converted to support roles, the Secretary of the Army follow through with the conversion because it would satisfy shortages in its support forces and further provide the types of forces that state governors have traditionally needed. Moreover, we recommended that the Secretary of Defense consider eliminating any Guard forces that exceed validated requirements. DOD fully concurred with our recommendations. In the aftermath of the Gulf War, the Army adopted a new training strategy that was designed to prepare combat brigades to deploy within 90 days of mobilization. The strategy refocuses peacetime training goals on proficiency at the platoon level and below, rather than up through the brigade level, for mission-essential tasks and gunnery. The strategy also includes efforts to improve individual job and leader training and implements a congressionally mandated program that assigned 5,000 active Army advisers to the brigades. In June 1995, we reported on 7 of 15 brigades that were scheduled to become enhanced brigades. 
We selected these seven brigades because they were roundout or roundup brigades to active component divisions and had received preference for training and resources. They had also been required to be ready to deploy at the Army’s highest readiness level within 90 days of mobilization. Therefore, their deployment criteria did not change when they became enhanced brigades. We reported on the readiness status of the seven combat brigades during 1992 through 1994, the first 3 years the new training strategy was tested, focusing on whether (1) the new strategy had enabled the brigades to meet peacetime training goals, (2) the advisers assigned to the brigades were working effectively to improve training readiness, and (3) prospects for having the brigades ready for war within 90 days were likely. For the most part, none of the brigades came close to achieving the training proficiency sought by the Army. The brigades were unable to recruit and retain enough personnel to meet staffing goals, and many personnel were not sufficiently trained in their individual job and leadership skills. Even if the brigades had made improvements in individual training, their 23-percent personnel turnover rate would quickly obliterate such gains. Collective training was also problematic. In 1993, combat platoons had mastered an average of just one-seventh of their mission-essential tasks, compared with a goal of 100 percent, and less than one-third of the battalions met gunnery goals. Although gunnery scores improved for four brigades in 1994, the brigades reported no marked improvement in the other key areas. The adviser program’s efforts to improve training readiness were limited by factors such as (1) an ambiguous definition of the advisers’ role; (2) poor communication between the active Army, advisers, brigades, and other National Guard officials, causing confusion and disagreement over training goals; and (3) difficult working relationships. 
The relationship between the active Army and the state-run Guard was characterized by an “us and them” environment that could undermine prospects for significant improvement in the brigades’ ability to conduct successful combat operations. We also reported that it was highly uncertain whether the Guard’s mechanized infantry and armor brigades could be ready to deploy 90 days after mobilization. Models estimated that the brigades would need between 68 and 110 days before being ready to deploy. However, these estimates assumed that the brigades’ peacetime training proficiency would improve to levels near those envisioned by the training strategy, thus shortening postmobilization training. One model, which included the possibility that the strategy’s goals would not be met, estimated that as many as 154 days would be required to prepare the brigades to deploy. In commenting on our report in April 1995, DOD generally agreed with our conclusions; however, DOD said it was too early in the implementation of the initiatives to evaluate improvement in the brigades’ readiness. In February 1996, we obtained the latest information on the enhanced brigades’ training proficiency from the Army’s U.S. Forces Command. According to Command officials, some of the same problems we identified in our report continue to exist and the enhanced brigades have not reached platoon-level proficiency. Specifically, the officials told us that the brigades experienced training difficulties during 1995, which precluded the units from being validated at platoon-level proficiency. Some of the problems that had a negative impact on unit training were (1) low attendance by personnel at annual training, (2) shortages in junior and senior enlisted personnel and officers, and (3) severe deficiencies in individual skills proficiency. For example, one brigade reported that 36 percent of its soldiers were not qualified in their individual military occupational skills. 
Despite the problems, Command officials said some brigades are improving; however, they have minimal data to support that position. The training situation with the enhanced brigades calls into question whether the current strategy of deploying National Guard combat brigades within 90 days is realistic. The continental air defense mission evolved during the Cold War to detect and intercept Soviet bombers attacking North America via the North Pole. This mission is carried out primarily by dedicated Air National Guard units. In his 1993 report on roles and missions, the Chairman of the Joint Chiefs of Staff had determined that the United States no longer needed a large, dedicated continental air defense force. Consequently, the Chairman recommended that the dedicated force be significantly reduced or eliminated and that existing active and reserve general purpose forces be tasked to perform the mission. The Secretary of Defense agreed with the Chairman’s recommendations and directed the Air Force to reduce the dedicated force but retain the mission primarily as an Air Force reserve component responsibility. To date, the Air Force has not aggressively implemented the Chairman’s or the Secretary of Defense’s recommendations. Rather, the Air Force continues to keep a dedicated force for the air defense mission and has reduced the force by less than 20 percent. We reported in May 1994 that a dedicated continental air defense force was no longer needed because the threat of a Soviet-style air attack against the United States had largely disappeared. As a result of the greatly reduced threat, the air defense force had been focusing its activities on air sovereignty missions. 
However, those missions could be performed by active and reserve general purpose and training forces because they had comparable or more capable aircraft, were located at or near most existing continental air defense bases and alert sites, and had pilots capable of performing air sovereignty missions or being trained to perform such missions. We stated that implementing the Chairman’s recommendations could result in significant savings. The amount of savings would depend on whether the dedicated air defense units were disbanded or assigned another mission. The Air Force reduced its dedicated Air National Guard force from 180 to 150 aircraft. We do not believe this reduction is in line with the Chairman’s recommendation. Moreover, we believe that retaining 150 dedicated aircraft would unnecessarily drain operation and maintenance funds. We asked the Congressional Budget Office to estimate the savings from the 1995 defense plan if all the air defense units were disbanded and their missions assigned to existing units. On the basis of a force of 150 aircraft, the office estimated a total savings of about $1.8 billion from fiscal years 1997 through 2000. Mr. Chairman, this concludes my prepared statement. I would be happy to address any questions you or other members of the subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. 
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the readiness of armed forces reserve components. GAO noted that: (1) reserve components provided crucial support and combat functions in the Persian Gulf War and in various peacekeeping operations; (2) the Army National Guard's combat forces far exceed projected force requirements for two major regional conflicts, while the Army has critical shortages in support functions; (3) none of the enhanced brigades that it reviewed achieved the training proficiency that the Army required for deployment within 90 days of mobilization; (4) active-duty advisers assigned to National Guard brigades were limited by an ambiguous definition of their role, poor management communication, and difficult working relationships; (5) it is uncertain that the Guard's mechanized infantry and armor brigades could deploy within 90 days after mobilization; (6) while it has found that a dedicated continental air defense force is no longer necessary to defend North America against a long-range air threat, the Air Force has only reduced its dedicated Air National Guard force for this mission from 180 aircraft to 150 aircraft; and (7) eliminating continental air defense units and assigning their missions to existing units could save $1.8 billion from fiscal years 1997 through 2000.
Since it started development in 2003, FCS has been at the center of the Army’s efforts to modernize into a lighter, more agile, and more capable combat force. The FCS concept involved replacing existing combat systems with a family of manned and unmanned vehicles and systems linked by an advanced information network. The Army anticipated that the FCS systems, along with the soldier and enabling complementary systems, would work together in a system of systems wherein the whole provided greater capability than the sum of the individual parts. The Army expected to develop this equipment in 10 years, procure it over 13 years, and field it to 15 FCS-unique brigades—about one-third of the active force at that time. The Army also had planned to spin out selected FCS technologies and systems to current Army forces throughout the system development and demonstration phase. As we reported in 2009, the FCS program was immature and unable to meet DOD’s own standards for technology and design from the start. Although adjustments were made, such as adding time and reducing requirements, vehicle weights and software code grew, key network systems were delayed, and technologies took longer to mature than anticipated (see fig. 1). By 2009, after an investment of 6 years and an estimated $18 billion, the viability of the FCS concept was still unknown. As such, we concluded that the maturity of the development efforts was insufficient and the program could not be developed and produced within existing resources. In April 2009, the Secretary of Defense proposed a significant restructuring of the FCS program to lower risk and address more near-term combat needs. 
The Secretary noted significant concerns that the FCS program’s vehicle designs—where greater information awareness was expected to compensate for less armor, resulting in lower weight and higher fuel efficiency—did not adequately reflect the lessons of counterinsurgency and close-quarters combat operations in Iraq and Afghanistan. As such, the Secretary recommended accelerating the fielding of ready-to-go systems and capabilities to all brigades; canceling the vehicle component of the FCS program, reevaluating the requirements, technology, and approach, and re-launching the Army’s vehicle modernization program; and addressing fee structure and other concerns with current FCS contracting arrangements. In June 2009, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued an acquisition decision memorandum that canceled the FCS acquisition program, terminated manned ground vehicle development efforts, and laid out plans for follow-on Army brigade combat team modernization efforts. DOD directed the Army to transition to an Army-wide modernization plan consisting of a number of integrated acquisition programs, including one to develop ground combat vehicles. Subsequently, the Army has been defining its ground force modernization efforts per the Secretary’s decisions and the June 2009 acquisition decision memorandum. Although the details are not yet complete, the Army took several actions through the end of calendar year 2009. It stopped all development work on the FCS manned ground vehicles—including the non-line of sight cannon—in the summer of 2009 and recently terminated development of the Class IV unmanned aerial vehicle and the countermine and transport variants of the Multifunction Utility/Logistics and Equipment unmanned ground vehicle. For the time being, the Army is continuing selected development work under the existing FCS development contract, primarily residual FCS system and network development. 
In October 2009, the Army negotiated a modification to the existing contract that clarified the development work needed for the brigade modernization efforts. The Army is implementing DOD direction and redefining its overall modernization strategy as a result of the Secretary of Defense’s decisions to significantly restructure the FCS program. It is transitioning from the FCS long-term acquisition orientation to a shorter-term approach that develops and fields new increments of capability every 2 years within capability packages. It now has an approved acquisition program that will produce and field the initial increment of the FCS spinout equipment, which includes unmanned aerial and ground vehicles as well as unattended sensors and munitions, and preliminary plans for two other major defense acquisition programs to define and develop follow-on increments and develop a new GCV. The Army also plans to integrate network capabilities across the Army’s brigade structure and to develop and field upgrades to other existing ground force equipment. The first program, Increment 1, is a continuation of previous FCS-related efforts to spin out emerging capabilities and technologies to current forces. Of the Army’s post-FCS modernization initiatives, Increment 1, which includes such FCS remnants as unmanned air and ground systems, unattended ground sensors, the non-line-of-sight launch system, and a network integration kit, is the furthest along in the acquisition development cycle (see fig. 2). The network integration kit includes, among other things, the integrated computer system, an initial version of the system-of-systems common operating environment (SOSCOE), early models of the Joint Tactical Radio System, and a range extension relay. In December 2009, the Army requested and DOD approved, with a number of restrictions, the low-rate initial production of Increment 1 systems that are expected to be fielded in the fiscal year 2011-12 capability package. 
The Army will be continuing Increment 1 development over the next 2 years while low-rate initial production proceeds. The projected development and production cost to equip nine brigades with the Increment 1 network and systems, supported by an independent cost estimate, would be about $3.5 billion. Among the Increment 1 systems, the network integration kit (NIK) provides enhanced communications and situational awareness through radios with multiple software waveforms, connections to unattended sensors, and links to existing networking capabilities; other Increment 1 systems provide force protection in an urban setting through a leave-behind, network-enabled reporting system of movement and/or activity in cleared areas, as well as independent, soldier-level aerial reconnaissance, surveillance, and target acquisition capability. For the time being, the Army is continuing selected development work—primarily that related to Increment 1, Increment 2, and network development—under the existing FCS development contract. The Army previously awarded a contract for long lead item procurement for Increment 1. A modification to that contract was recently issued to begin low-rate initial production of the Increment 1 systems. The Army has also recently released a request for proposals for the technology development phase of the proposed GCV development effort. Contractor proposals for GCV are expected to include plans and/or solutions for, among other things, survivability (hit avoidance system, armor, and vehicle layout) and mobility (propulsion and power generation and cooling). According to the request for proposals, the proposals can utilize prior Army investment in armor recipes, but they will not get an inherent advantage for doing so. Each solution will be based on its own merits. 
Contractor proposals are to be submitted in April 2010, and cost-plus-type contracts are to be awarded after the Milestone A decision in September 2010. The challenge facing both DOD and the Army is to set these ground force modernization efforts on the best footing possible by buying the right capabilities at the best value. In many ways, DOD and the Army have set modernization efforts on a positive course, and they have an opportunity to reduce risks by adhering to the body of acquisition legislation and policy reforms—which incorporate knowledge-based best practices we identified in our previous work—that have been introduced since FCS started in 2003. The new legislation and policy reforms emphasize a knowledge-based acquisition approach, a cumulative process in which certain knowledge is acquired by key decision points before proceeding. In essence, knowledge supplants risk over time. Additionally, DOD and the Army can further reduce risks by considering lessons learned from problems that emerged during the FCS development effort. Initial indications are that the Army is moving in that direction. However, in the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of one brigade set of Increment 1 systems despite having acknowledged that the systems are immature, unreliable, and cannot perform as required. The body of acquisition legislation and DOD policy reforms introduced since FCS started in 2003 incorporates nearly all of the knowledge-based practices we identified in our previous work (see table 2). For example, DOD acquisition policy includes controls to ensure that programs have demonstrated a certain level of technology maturity, design stability, and production maturity before proceeding into the next phase of the acquisition process. 
If the Army proceeds with its preliminary plans for new acquisition programs, then adherence to the acquisition direction in each of its new acquisition efforts provides an opportunity to improve the odds for successful outcomes, reduce risks for follow-on Army ground force modernization efforts, and deliver needed equipment more quickly and at lower costs. Conversely, acquisition efforts that proceed with less technology, design, and manufacturing knowledge than best practices suggest face a higher risk of cost increases and schedule delays. As shown in table 2, the cumulative building of knowledge consists of information that should be gathered at three critical points over the course of a program: Knowledge point 1 (at the program launch or Milestone B decision): Establishing a business case that balances requirements with resources. At this point, a match must be made between the customer’s needs and the developer’s available resources—technology, engineering knowledge, time, and funding. A high level of technology maturity, demonstrated via a prototype in its intended environment, indicates whether resources and requirements match. Also, the developer completes a preliminary design of the product that shows that the design is feasible and that requirements are predictable and doable. Knowledge point 2 (at the critical design review between design integration and demonstration): Gaining design knowledge and reducing integration risk. At this point, the product design is stable because it has been demonstrated to meet the customer’s requirements as well as cost, schedule, and reliability targets. The best practice is to achieve design stability at the system-level critical design review, usually held midway through system development. 
Completion of at least 90 percent of engineering drawings at this point provides tangible evidence that the product’s design is stable, and a prototype demonstration shows that the design is capable of meeting performance requirements. Knowledge point 3 (at production commitment or the Milestone C decision): Achieving predictable production. This point is achieved when it has been demonstrated that the developer can manufacture the product within cost, schedule, and quality targets. The best practice is to ensure that all critical manufacturing processes are in statistical control—that is, they are repeatable, sustainable, and capable of consistently producing parts within the product’s quality tolerances and standards—at the start of production. The Army did not position the FCS program for success because it did not establish a knowledge-based acquisition approach—a strategy consistent with DOD policy and best acquisition practices—to develop FCS. The Army started the FCS program in 2003 before defining what the systems were going to be required to do and how they were going to interact. It moved ahead without determining whether the FCS concept could be developed in accordance with a sound business case. Specifically, at the FCS program’s start, the Army had not established firm requirements, mature technologies, a realistic cost estimate, or an acquisition strategy wherein knowledge drives schedule. By 2009, the Army still had not shown that emerging FCS system designs could meet requirements, that critical technologies were at minimally acceptable maturity levels, and that the acquisition strategy was executable within estimated resources. 
With one notable exception, there are initial indications that DOD and the Army are moving forward to implement the acquisition policy reforms as they proceed with ground force modernization, consistent with the Secretary of Defense’s direction for the ground vehicle modernization program to “get the acquisition right, even at the cost of delay.” In addition, DOD anticipates that the GCV program will comply with DOD acquisition policy in terms of utilizing competitive system or subsystem prototypes. According to a DOD official, a meeting was recently held to consider a materiel development decision for the GCV, and the Army is proposing to conduct a preliminary design review on GCV before its planned Milestone B decision point. Additionally, a configuration steering board is planned for later in 2010 to address reliability and military utility of infantry brigade systems. In the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of Increment 1 systems despite having acknowledged that the systems are immature, unreliable, and cannot perform as required. In December 2009, the Under Secretary of Defense for Acquisition, Technology, and Logistics approved low-rate initial production of Increment 1 equipment for one infantry brigade but noted that there is an aggressive risk reduction plan to grow and demonstrate the network maturity and reliability needed to support continued Increment 1 production and fielding. In the associated acquisition decision memorandum, the Under Secretary acknowledged the risks of pursuing Increment 1 production, including early network immaturity; lack of a clear operational perspective of the early network’s value; and large reliability shortfalls of the network, systems, and sensors. 
The Under Secretary also said that he was aware of the importance of fielding systems to the current warfighter and that the flexibility to deploy components as available would allow DOD to “best support” the Secretary of Defense’s direction to “win the wars we are in.” Because of that, the Under Secretary specified that a number of actions be taken over the next year or more and directed the Army to work toward having all components for the program fielded as soon as possible and to deploy components of the program as they are ready. However, the Under Secretary did not specify the improvements that the Army needed to make or that those improvements are a prerequisite for approving additional production lots of Increment 1. The approval for low-rate initial production is at variance with DOD policy and Army expectations. DOD’s current acquisition policy requires that systems be demonstrated in their intended environments using the selected production-representative articles before the production decision occurs. However, the testing that formed the basis for the Increment 1 production decision included surrogates and non-production-representative systems, including the communications radios. As we have previously noted, testing with surrogates and non-production-representative systems is problematic because it does not conclusively show how well the systems can address current force capability gaps. Furthermore, Increment 1 systems—which are slated for a fiscal year 2011-12 fielding—do not yet meet the Army’s expectations that new capabilities would be tested and their performance validated before being deployed in a capability package. As noted in 2009 test results, system performance and reliability during testing was marginal at best. For example, the demonstrated reliability of the Class I unmanned aerial vehicle was about 5 hours between failures, compared with a requirement of 23 hours between failures. 
The Army asserts that Increment 1 systems’ maturity will improve rapidly but admits that it will be a “steep climb” and not a low-risk effort. While the Under Secretary took current warfighter needs into account in his decision to approve Increment 1 low-rate initial production, it is questionable whether the equipment can meet one of the main principles underpinning knowledge-based acquisition—whether the warfighter needs can best be met with the chosen concept. Test reports from late 2009 showed conclusively that the systems had limited performance, and that this reduced the test unit’s ability to assess and refine tactics, techniques, and procedures associated with employment of the equipment. The Director, Operational Test and Evaluation, recently reported that none of the Increment 1 systems have demonstrated an adequate level of performance to be fielded to units and employed in combat. Specifically, the report noted that reliability is poor and falls short of the level expected of an acquisition system at this stage of development. Shortfalls in meeting reliability requirements may adversely affect Increment 1’s overall operational effectiveness and suitability and may increase life-cycle costs. In addition, in its 2009 assessment of the increment’s limited user test—the last test before the production decision was made—the Army’s Test and Evaluation Command indicated that the Increment 1 systems would be challenged to meet warfighter needs. It concluded that, with the exception of the Non-Line-of-Sight Launch System, which had not yet undergone flight testing, all the systems were considered operationally effective and survivable, but with limitations, because they were immature and had entered the test as pre-production representative systems and/or pre-engineering design models. Additionally, the Command noted that these same systems were not operationally suitable because they did not meet required reliability expectations. 
Army and DOD officials made a very difficult decision when they canceled what was the centerpiece of Army modernization—the FCS program. As they transition away from the FCS concept, both the Army and DOD have an opportunity to improve the likely outcomes for the Army’s ground force modernization initiatives by adhering closely to recently enacted acquisition reforms and by seeking to avoid the numerous acquisition pitfalls that plagued FCS. As DOD and the Army proceed with these significant financial investments, they should keep in mind the Secretary of Defense’s admonition about the new ground vehicle modernization program: “get the acquisition right, even at the cost of delay.” Based on the preliminary plans, we see a number of good features, such as the Army’s decision to pursue an incremental acquisition approach for its post-FCS efforts. However, it is vitally important that each of those incremental efforts adheres to knowledge-based acquisition principles and strikes a balance between what is needed, how fast it can be fielded, and how much it will cost. Moreover, the acquisition community needs to be held accountable for expected results, and DOD and the Army must not be willing to accept whatever results are delivered regardless of military utility. We are concerned that in their desire for speedy delivery of emerging equipment to our warfighters in the field, DOD and the Army did not strike the right balance in prematurely approving low-rate initial production of Increment 1 of brigade modernization. Although the Army will argue that it needs to field these capabilities as soon as possible, none of these systems have been designated as urgent, and it is not helpful to provide early capability to the warfighter if those capabilities are not technically mature and reliable. 
If the Army moves forward too fast with immature Increment 1 designs, then that could cause additional delays as the Army and its contractors concurrently address technology, design, and production issues. Production and fielding is not the appropriate acquisition phase in which to be working on such basic design issues. In our upcoming report, we will make recommendations intended to reduce the risk of proceeding into production with immature technologies. In that regard, we will recommend that the Secretary of Defense mandate that the Army correct the identified maturity and reliability issues with the Increment 1 network and systems prior to approving any additional lots of the Increment 1 network and systems for production. Specifically, the Army should ensure that the network and the individual systems have been independently assessed as fully mature, meet reliability goals, and have been demonstrated to perform as expected using production-representative prototypes. We will also recommend that the Secretary of the Army not allow fielding of the Increment 1 network or any of the Increment 1 systems until the identified maturity and reliability issues have been corrected. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841 or sullivanm@gao.gov. Individuals making key contributions to this statement include William R. Graveline, Assistant Director; William C. Allbritton; Andrea M. Bivens; Noah B. Bleicher; Tana M. Davis; Marcus C. Ferguson; and Robert S. Swierczek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2003, the Future Combat System (FCS) program has been the centerpiece of the Army's efforts to transition to a lighter, more agile, and more capable combat force. In 2009, however, concerns over the program's performance led to the Secretary of Defense's decision to significantly restructure and ultimately cancel the acquisition program. As a result, the Army is outlining a new approach to ground force modernization. This statement outlines the Army's preliminary post-FCS actions and identifies the challenges DOD and the Army must address as they proceed. This testimony is based on GAO's report on the Army's Ground Force Modernization effort scheduled for release March 15, 2010. It emphasizes the December 2009 decision to begin low-rate initial production for Increment 1 of the Brigade Combat Team Modernization. The Army is implementing DOD direction and redefining its overall modernization strategy as a result of the Secretary of Defense's decision to significantly restructure the FCS program. It is transitioning from the FCS long-term acquisition orientation to a shorter-term approach that biannually develops and fields new increments of capability within capability packages. It now has an approved acquisition program that will produce and field the initial increment of the FCS spinout equipment, which includes unmanned aerial and ground vehicles as well as unattended sensors and munitions. It has preliminary plans for two other major defense acquisition programs to (1) define and develop follow-on increments and (2) develop a new Ground Combat Vehicle (GCV). The individual systems within Increments 1 and 2 are to be integrated with a preliminary version of an information network. Currently, the Army is continuing selected development work--primarily that related to Increments 1 and 2, and the information network--under the existing FCS development contract. 
The Army has recently released a request for proposals for the technology development phase of the proposed GCV development effort. The Army's projected investment in Increments 1 and 2 and GCV is estimated to be over $24 billion through fiscal year 2015. With these modernization efforts at an early stage, DOD and the Army face the immediate challenge of setting them on the best possible footing by buying the right capabilities at the best value. DOD and the Army have an opportunity to better position these efforts by utilizing an enhanced body of acquisition legislation and DOD policy reforms--which now incorporate many of the knowledge-based practices that GAO has previously identified--as well as lessons learned from the FCS program. Preliminary plans suggest the Army and DOD are strongly considering lessons learned. However, DOD recently approved the first of several planned low-rate initial production lots of Increment 1 despite having acknowledged that the systems and network were immature, unreliable, and not performing as required. That decision reflects DOD's emphasis on providing new capabilities quickly to combat units. This decision did not follow knowledge-based acquisition practices and runs the risk of delivering unacceptable equipment to the warfighter and trading off acquisition principles whose validity has been so recently underscored. The Army needs to seize the opportunity of integrating acquisition reforms, knowledge-based acquisition practices, and lessons learned from FCS into future modernization efforts to increase the likelihood of successful outcomes.
We found that as of September 30, 2011, more than $794 million in undisbursed balances remained in PMS in 10,548 expired grant accounts. These are accounts that were more than 3 months past the grant end date, had no activity for 9 months or more, and therefore should be considered for grant closeout. This is an improvement from 2008, when we reported that at the end of calendar year 2006, roughly $1 billion in undisbursed funding remained in expired PMS grant accounts. These expired grant accounts do not include accounts associated with grant programs for which the duration of the grant is not limited to a specific time period, such as payments to states for the Medical Assistance Program, known as Medicaid, and Temporary Assistance for Needy Families. This improvement is notable given that the overall amount of grant disbursements through PMS increased by about 23 percent from 2006 to 2011. However, more work needs to be done to further improve the timeliness of grant closeout and reduce undisbursed balances. We have highlighted three areas in need of particular attention. First, we found that undisbursed balances remained in grant accounts several years past their expiration date. We found that 991 expired grant accounts containing a total of $110.9 million in undisbursed funding were more than 5 years past the grant end date at the end of fiscal year 2011. Of these, 115 expired grant accounts containing roughly $9.5 million in undisbursed funding remained open more than 10 years past the grant end date. Federal regulations generally require that grantees retain financial records and other documents pertinent to a grant for a period of 3 years from the date of submission of the final report. 
Over time, the risk increases that grantees will not have retained the financial documents and other grant information that federal agencies need to properly reconcile financial information and make the necessary adjustments to grant award amounts and amounts of federal funds paid to recipients. This could potentially result in the payment of unnecessary and unallowable costs. Second, we found that a small percentage of grant accounts (a little more than 1 percent) with undisbursed balances of $1 million or more accounted for more than a third of the total undisbursed funds in expired grant accounts. Overall, 123 accounts from eight different federal agencies had more than $1 million in undisbursed balances at the end of fiscal year 2011 for a combined total of roughly $316 million in undisbursed balances. Accounts with undisbursed balances remaining after the grant end date can indicate a potential grant management problem. Data showing that some grantees have not expended large amounts of funding, such as $1 million or more, by the specified grant end date raise concern that the grantees may not have fully met the program objectives for the intended beneficiaries within the agreed-upon time frames. Third, we found more than 28,000 expired grant accounts in PMS with no undisbursed balances remaining that had not been closed out as of the end of fiscal year 2011. According to data provided by PSC, PMS users were charged a total of roughly $173,000 per month to maintain the more than 28,000 expired grant accounts with zero-dollar balances listed on the year-end closeout report. This would represent roughly $2 million in fees if agencies were billed for these accounts for the entire year. While the fees are small relative to the size of the original grant awards, they can accumulate over time. If the grant has otherwise been administratively and financially closed out, then agencies are paying fees to maintain grant accounts that are no longer needed. 
However, the presence of expired grant accounts with no undisbursed funds remaining raises concerns that administrative and financial closeout—the final point of accountability for these grants, which includes such important tasks as the submission of financial and performance reports—may not have been completed. In addition to data from PMS, we also reviewed data from the ASAP system and found that as of September 30, 2011, $126.2 million in undisbursed balances remained in 1,094 dormant grant accounts. Agencies can use the information in these reports to help identify accounts in need of attention and unspent funds available for deobligation. For example, agencies may want to focus attention on accounts where there has been no activity for a prolonged period. We found roughly $11 million in 179 accounts that had been inactive for 5 years or more. We have found that when agencies made concerted efforts to address timely grant closeout, they, their inspectors general, and auditors reported that they were able to improve the timeliness of grant closeouts and decrease the amount of undisbursed funding in expired grant accounts. Agencies’ approaches generally focused on elevating timely grant closeouts to a higher agency management priority and on improving overall closeout processing. For example, in response to past audit reports, HHS officials reported increasing monitoring of grant closeout. Since fiscal year 2006, the HHS independent auditor had routinely reported on concerns with management controls over grant closeout, including a backlog of HHS grant accounts in PMS that were already beyond what the auditor considered a reasonable time frame for closeout. In fiscal year 2011, the independent auditor noted significant improvements in the closeout of grants in PMS. 
While we found that roughly three-fourths of all undisbursed balances in expired PMS grant accounts were from grants issued by HHS, we also found that the total undisbursed balances in these accounts represented the lowest percentage (2.7 percent) for any federal department included on the September 30, 2011, closeout report. In comments on our draft report, HHS reported that it had identified $116 million in undisbursed balances in PMS available for deobligation through a special initiative begun in 2011 and is updating existing department policies and procedures to improve the grant closeout process going forward. In 2008, we recommended that OMB instruct all executive departments and independent agencies to annually track the amount of undisbursed balances in expired grant accounts and report on the status and resolution of the undisbursed funding in their annual performance reports. At the time, OMB supported the intent of our recommendations, but its comments did not indicate a commitment to implement them. Starting in 2010, OMB issued guidance on tracking and reporting undisbursed balances in expired grant accounts, but only to certain federal departments and entities covered by the Commerce, Justice, Science, and Related Agencies Appropriations Act, as required by law. However, in its instructions, OMB equated “expired grant accounts” with expired appropriation accounts. Based on this definition, OMB’s guidance included grant accounts that were still available for disbursement and was not limited only to those grant accounts eligible for closeout. In our review of CFO Act agencies’ annual performance reports for fiscal years 2009 to 2011, we found that systematic, agencywide information on undisbursed balances in grant accounts eligible for closeout was largely lacking. 
In our 2012 grant closeout report, we reiterate our recommendation that OMB instruct all executive departments and independent agencies to report in their annual performance reports on the status and resolution of undisbursed funding in grants that have reached the grant end date, the actions taken to resolve the undisbursed funding, and the outcomes associated with these actions. In addition, we recommend that the Director of OMB take the following three actions:

- Revise the definition of “undisbursed balances in expired grant accounts” in future guidance issued to agencies to focus on undisbursed balances obligated to grant agreements that have reached the grant end date and are eligible for closeout.
- Instruct agencies with undisbursed balances still obligated to grants several years past their grant end date to develop and implement strategies to quickly and efficiently take action to close out these grants and return unspent funds to the Treasury when appropriate.
- Instruct agencies with expired grant accounts in federal payment systems with no undisbursed balances remaining to develop and implement procedures to annually identify and close out these accounts to ensure that all closeout requirements have been met and to minimize any potential fees for accounts with no balances.

OMB staff said that they generally agreed with the recommendations and will consider them as they review and streamline grant policy guidance. OMB did not provide specific actions or time frames with which it would address the issues that we have raised. We will continue to monitor OMB’s action on our recommendations. The challenge presented by undisbursed balances in expired grant accounts is just one of a number of grants management challenges we have identified in our past work. Grants continue to be an important tool used by the federal government to achieve national objectives. 
As the federal government confronts long-term and growing fiscal challenges, its ability to maintain the flow of intergovernmental revenue, such as through grant programs, could be constrained. To make the best use of federal grant funds, it is critical to address grants management challenges that could impact the efficiency and effectiveness of federal grants processes. Accordingly, the Subcommittee has requested that we examine a number of areas involving these issues in future work. However, before I discuss these I would like to put them in a broader context by briefly describing the level of recent federal grant spending and how it has changed over the last three decades. Grants have been, and continue to be, an important tool used by the federal government to provide program funding to state and local governments. According to OMB, federal outlays for grants to state and local governments increased from $91 billion in fiscal year 1980 (about $221 billion in 2011 constant dollars) to over $606 billion in fiscal year 2011. Although many federal departments and agencies award grants, HHS, which administers the Medicaid program, is by far the largest grant-making agency, with grants outlays of almost $348 billion in fiscal year 2011, or about 57 percent of the total federal grants outlays that year. Even when Medicaid’s outlays of $275 billion are excluded, HHS remains the largest federal grant-making department. Federal outlays for grants to state and local governments grew over the period from fiscal years 1980 to 2011, in constant dollars, with an increasing share of that total going to Medicaid over time. Given the federal government’s use of grants to achieve national objectives and respond to emerging trends, this Subcommittee has recently requested that we conduct a number of grant-related reviews in support of its oversight efforts. 
Today, I would like to briefly highlight four areas where our previous work and that of the inspectors general and others have identified challenges, and where we are beginning the work you requested related to the management of grant programs. Specifically, they are the streamlining of grants management processes; the measurement of grant performance; grant lessons learned from implementing the American Recovery and Reinvestment Act of 2009 (Recovery Act); and internal control weaknesses in grants management processes. For more than a decade, the federal government has undertaken several initiatives aimed at streamlining governmentwide grants management. Over the years, Congress has expressed concern over the inconsistencies and weaknesses we and the inspectors general have found in grants management and oversight. In response to your request, we plan to examine the progress OMB and federal grant governance bodies have made toward streamlining grants management. We also expect to assess what further actions should be taken to simplify processes, reduce unnecessary burdens, and improve the governance of streamlining initiatives. We plan to report our results next year. We also expect to evaluate the extent to which there are governmentwide requirements for measuring and reporting grant performance and the extent to which federal agencies measure grant performance to report progress toward their goals, as well as offer assistance to grantees on collecting data and reporting grant performance. As with our streamlining work, the specifics of this grant performance reporting work are currently under development, and we anticipate a 2013 report. In our past work we have reported that effective performance accountability provisions are of fundamental importance in assuring the proper and effective use of federal funds to achieve program goals. 
Under the Recovery Act, grants have played an important role in distributing federal funds in light of the most serious economic crisis since the Great Depression. As of June 2012, Treasury had paid out more than $250 billion in Recovery Act funds to state and local governments, much of it through grants. Given the significant investment made in the Recovery Act, and the considerable challenges facing our nation moving forward, this Subcommittee recognized the importance of collecting, analyzing, and sharing grant lessons and insights gained as a result of this process. Building on our previous reviews, we will examine lessons from the implementation of the Recovery Act—including specific examples of practices and approaches that worked as well as challenges encountered by federal, state, and local agencies. Among the potential issues to consider are the efforts to facilitate coordination and collaboration among federal, state, local, and nongovernmental partners and actions taken to enhance the organizational and administrative capacity of federal partners. Once again, we anticipate reporting to the Subcommittee next year. Finally, in numerous reviews over the years, we have identified weaknesses in federal agencies’ processes for managing and overseeing grant programs. Among the issues we are planning to address in future work is how federal agencies can improve internal control over grants monitoring. We will also examine what improvements, if any, are needed in federal agencies’ internal controls to help ensure the primary grantees are providing adequate oversight of subgrantees. The improvements made in the timeliness of grant closeouts since our 2008 report demonstrate that congressional oversight can lead agencies to focus attention on a specific grant challenge, and result in real progress. 
However, our recent update of our earlier analysis of undisbursed balances also shows that more still needs to be done to close out grants; agencies would use their resources most effectively by focusing initially on older accounts with larger undisbursed balances. As our review of past grant work suggests, there are numerous other issues where congressional attention could also likely pay dividends. This is all the more relevant because federal grant programs remain important tools to achieve national objectives and continue to be a significant component of federal spending. We look forward to continuing to support this Subcommittee’s efforts to examine the design and implementation of federal grants and participating in its active oversight agenda. Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. If you or your staff have any questions about this testimony, please contact me at (202) 512-6806 or czerwinskis@gao.gov, or Beryl H. Davis, Director, at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Phyllis L. Anderson, Assistant Director; Peter Del Toro, Assistant Director; Thomas M. James, Assistant Director; Kimberly A. McGatlin, Assistant Director; Laura M. Bednar, Maria C. Belaval, Anthony M. Bova, Amy R. Bowser, Virginia A. Chanley, Melissa L. King, Thomas J. McCabe, Diane N. Morris, and Omari A. Norman. Additional contributions were made by Andrew Y. Ching, Travis P. Hill, Jason S. Kirwan, Jennifer K. Leone, Cynthia M. Saunders, Albert C. Sim, and Michael Springer. 
While there can be substantial variation among grant programs, figure 1 illustrates how closing out grants could allow an agency to redirect resources toward other projects and activities or return unspent funds to Treasury.
As the federal government confronts long-term fiscal challenges, it is critical to improve the efficiency of federal grants processes, such as grant closeout procedures that allow for the return of unspent balances to the Treasury. In 2008, GAO reported that about $1 billion in undisbursed funding remained in expired grant accounts in the largest civilian payment system for grants, the Payment Management System (PMS). For this statement, GAO provides information from its April 2012 report updating its 2008 analysis. GAO also describes federal grant spending over the last three decades and discusses other grant management challenges identified in its past work and that of others. This testimony addresses (1) the amount of undisbursed funding remaining in expired grant accounts; (2) actions OMB and agencies have taken to track undisbursed balances; (3) GAO recommendations to improve grant closeout; (4) recent and historical funding levels for federal grants; and (5) GAO's ongoing and future work on grants management issues. Closeout is an important final point of grants accountability. It helps to ensure that grantees have met all financial and reporting requirements. It also allows federal agencies to identify and redirect unused funds to other projects and priorities as authorized or to return unspent balances to the Department of the Treasury (Treasury). At the end of fiscal year 2011, GAO identified more than $794 million in funding remaining in expired grant accounts (accounts that were more than 3 months past the grant end date and had no activity for 9 months or more) in PMS. GAO found that undisbursed balances remained in some grant accounts several years past their expiration date: $110.9 million in undisbursed funding remained unspent more than 5 years past the grant end date, including $9.5 million that remained unspent for 10 years or more. 
Nevertheless, the more than $794 million in undisbursed balances remaining in PMS represents an improvement in closing out expired grant accounts with undisbursed balances in PMS compared to the approximately $1 billion GAO found in 2008. This improvement is notable given that the overall amount of grant disbursements through PMS increased by about 23 percent from 2006 to 2011. When agencies made concerted efforts to address timely grant closeout, they and their inspectors general and auditors reported that they were able to improve the timeliness of grant closeouts and decrease the amount of undisbursed funding in expired grant accounts. GAO found that raising the visibility of the problem within federal agencies can also lead to improvements in grant closeouts. However, GAO’s review of agencies’ annual performance reports for fiscal years 2009 to 2011 found that systematic, agencywide information on undisbursed balances in grant accounts eligible for closeout is still largely lacking. The challenge presented by undisbursed balances in expired grant accounts is just one of a number of grants management challenges identified in past GAO work. Addressing these challenges is critical to increasing the efficient and effective use of federal grant funds, which represent a significant component of overall federal spending. According to the Office of Management and Budget (OMB), federal outlays for grants to state and local governments, including Medicaid, increased from $91 billion in fiscal year 1980 (about $221 billion in 2011 constant dollars) to more than $606 billion in fiscal year 2011, accounting for approximately 17 percent of total federal outlays. During this 30-year period there has been a shift in grant spending, increasing the percentage of grant funding of Medicaid while decreasing the percentage of funding of non-Medicaid-related grant programs. 
GAO work on grants over the last decade has identified a range of issues related to the management of grant programs, including the streamlining of grants management processes, the measurement of grant performance, grant lessons learned from implementing the American Recovery and Reinvestment Act of 2009, and internal control weaknesses. GAO will be looking at each of these grants management issue areas in future work for this Subcommittee. For grant closeout, GAO’s April 2012 report recommended OMB revise future guidance to better target undisbursed balances and instruct agencies to take action to close out grants that are past their end date or have no undisbursed balances remaining. OMB staff said they generally agreed with and will consider the recommendations.
Established in 1965, HUD is the principal federal agency responsible for programs in four areas—housing assistance, community development, housing finance, and regulatory issues related to areas such as lead-based paint abatement and fair housing. To carry out its many responsibilities, HUD was staffed by 9,386 employees as of February 1999. Housing Assistance: HUD provides (1) public housing assistance through allocations to public housing authorities and (2) private-market housing assistance under section 8 of the U.S. Housing Act of 1937 for properties—referred to as project-based assistance—or for tenants—known as tenant-based assistance. In contrast to entitlement programs, which provide benefits to all who qualify, the benefits of HUD’s housing assistance programs are limited by budgetary constraints to only about one-fourth of those who are eligible. Community Development: Primarily through grants to the states, large metropolitan areas, small cities, towns, and counties, HUD provides community planning and development funds for local economic development under its Community Development Block Grant (CDBG) and Empowerment Zone/Enterprise Community Programs (EZ/EC), housing development under its HOME Program, and assistance to the homeless under its McKinney Act Homeless Programs. The funding for some programs, such as those for the homeless, may also be distributed directly to nonprofit groups and organizations. Housing Finance: The Federal Housing Administration (FHA) insures lenders—including mortgage banks, commercial banks, savings banks, and savings and loan associations—against losses on mortgages for single-family properties, multifamily properties, and other facilities. The Government National Mortgage Association, a government-owned corporation within HUD, guarantees investors the timely payment of principal and interest on securities issued by lenders of FHA-insured and VA- and Rural Housing Service-guaranteed loans. 
Regulatory Issues: HUD is responsible for regulating interstate land sales, home mortgage settlement services, manufactured housing, lead-based paint abatement, and home mortgage disclosures. HUD also supports fair housing programs and is partially responsible for enforcing federal fair housing laws. The Congress supports HUD’s programs through annual appropriations that are subject to spending limits under the Budget Enforcement Act, as amended. For fiscal year 2000, HUD is proposing a total budget of about $28 billion in new discretionary budget authority, which, in combination with available budget authority from prior years, will help support about $34 billion in discretionary outlays. This request represents a 9-percent increase in budget authority over fiscal year 1999. In its Fiscal Year 2000 Budget Summary, HUD states that its proposed budget will allow the renewal of all Section 8 rental assistance contracts, increases to virtually all program areas, and continued increases to programs, such as CDBG and Homeless, that address communities’ worst case needs. The summary also states that many program enhancements will be initiated, and, as we discuss below, HUD proposes to fund many set-asides within existing programs. HUD’s fiscal year 2000 budget request includes 19 new initiatives and programs that were not funded during fiscal year 1999. Some, however, may have been funded in prior years. These fall under various programs, including Community Development and Planning, Public and Indian Housing, and Housing Programs. This request includes seven set-asides totaling $210 million. Five of the set-asides ($60 million) will be funded within the CDBG Program and two ($150 million) in the HOME Program. See appendix I for a list of the proposed fiscal year 2000 initiatives and their status in fiscal year 1999. We also note that HUD’s fiscal year 2000 request includes significant funding increases in several ongoing programs, including Section 8 contract renewals. 
See appendix II for a list of these programs. While the budget impact—a net increase of about 9 percent in new budget authority—of the new programs and increases to existing programs that HUD proposes is not overwhelming, the proposed budget does raise questions about HUD’s capacity to manage such an increase. Questions arise for two reasons: First, HUD is currently going through a significant, complex, and time-consuming organizational reform in which many functions that it once managed in many field offices will be managed in one or more “centers” in various parts of the country. This reform is necessary to improve the efficiency and effectiveness of HUD’s operations and to address long-standing yet basic problems in program management. To accommodate this reform, HUD is moving and retraining many of its staff. Second, new initiatives and programs require a certain amount of dedicated resources to plan, implement, and manage over the long term. It is questionable whether these resources are available at this point in the reinvention of HUD. Therefore, we are concerned about whether HUD has the capacity to effectively initiate and oversee the set of new programs it is proposing for fiscal year 2000 while it is also trying to develop for itself a new operating style and way of doing business. One of the largest program increases in HUD’s fiscal year 2000 budget proposal is in its Section 8 housing assistance program (see app. II). For the past few years, we have reviewed the accuracy of HUD’s budget proposals for the tenant-based and project-based components of this program and have found many inconsistencies. For example, in July 1998, we reported that the Department had not identified all available Section 8 project-based unexpended balances and accounted for them in its budget process. As a result, HUD requested $1.3 billion in its fiscal year 1999 request for project-based funding that it did not need to cover shortfalls in current contracts. 
To remedy such overstatements, we recommended that HUD’s future funding requests for the Section 8 program—both the tenant-based and the project-based components—fully consider unexpended balances that may be available to offset funding needs. HUD has improved its annual review of unexpended balances. Although HUD’s budget justification shows that funding needs to cover contract shortfalls will be met by existing unexpended balances, it does not identify the estimated funding shortfall or the amount of unexpended balances available in each of the project- and tenant-based components. As a result, we cannot assess the extent to which the Department’s budget request includes the use of unexpended project-based balances. Therefore, we have requested information from HUD on its shortfall estimates and on the unexpended balances that may be available to fund these shortfalls. Balances in excess of those needed to fund shortfalls could be used to offset HUD’s request for contract renewal funding. HUD’s fiscal year 2000 budget justification raises other issues about its Section 8 program request that we believe warrant review. These issues include the basis for the contract renewal costs for the Section 8 project-based program for fiscal year 2000—more than $3 billion—as well as the basis for renewal costs beyond 2000. The budget proposal shows that HUD’s estimates of the unit costs of some project-based housing are substantially higher than HUD projected just a year ago. Moreover, unlike prior years, HUD’s fiscal year 2000 budget does not provide estimates of Section 8 costs in the years following 2000. Therefore, we have requested information that would support HUD’s assumptions and source data for both the number of units and average unit costs for this program in fiscal year 2000 and for several years thereafter. 
We also believe that the basis for the substantial increase in total Section 8 project-based and tenant-based outlays—$2.5 billion—should be examined, as well as HUD’s rationale for the $4.2 billion advance appropriation for fiscal year 2001 requested in the fiscal year 2000 budget request. HUD’s CDBG Program provides communities with grants for activities that will benefit low- and moderate-income people, prevent or eliminate slum or blight, or meet urgent community needs. While CDBG is largely allocated on a formula basis, funds are also set aside for specific purposes such as Community Outreach Partnership, Hispanic Serving Institutions, and Historically Black Colleges and Universities. HUD’s fiscal year 2000 budget request for the CDBG Program proposes set-asides for 10 projects or initiatives totaling about $428 million. Of the 10 set-asides, half are for new initiatives totaling about $60 million. These new set-asides include Metropolitan Job Links, Homeownership Zones, EZ/EC Technical Assistance, EZ Round II Planning and Implementation, and a Citizens Volunteer Housing Corps. The CDBG Program is HUD’s most flexible tool for assisting communities to meet local development priorities. To help monitor it and other formula grant programs like HOME and Housing Opportunities for Persons With AIDS, HUD developed the Integrated Disbursement and Information System (IDIS) to provide current information on how grantees are using federal funds and what they are achieving with those funds. However, our recent work shows that IDIS, as implemented, does not provide detailed performance information. Also, because of its design, the information in IDIS is incomplete, inaccurate, and untimely. Many states are apprehensive about using the problem-plagued system and plan to adopt it only if forced to do so by law. To broaden IDIS’s scope, HUD plans a replacement system, the Departmental Grants Management System, which it intends to design to track every grant. 
However, HUD plans to convert the current version of IDIS for use in the new grants management system, which may occur over the next several years. Also of immediate concern is the fact that IDIS is not secure, which opens up the possibility of unauthorized access to program funds. Because of the poor quality of information in IDIS and the lack of a readily available replacement system, we are concerned that the activities and projects under CDBG may not be sufficiently reported and considered for budget request offsets. This is of particular concern because past budget requests show that actual CDBG unobligated balances have been increasing at a rate well over $50 million annually since fiscal year 1996. Moreover, in 1998, the authority to use about $7.6 million in CDBG funds expired. Although a reasonable explanation for this expiration may exist, we would not expect funds to expire without benefiting grantees, given the flexibility for the uses of CDBG funds and the discretion grantees have for their use. Contract Administration is a new initiative in fiscal year 2000 under HUD’s Housing Certificate Fund. HUD is requesting $209 million for this program, of which $42 million will be available to contractors who have not previously participated in this activity. According to HUD, the use of contract administrators to manage project-based Section 8 housing assistance contracts will relieve HUD field staff of many duties they currently perform in this regard, allowing them to concentrate on their direct responsibilities, such as monitoring program effectiveness and ensuring that property owners are accountable for the rental subsidy payments they receive. Duties to be shifted to the new contract administrators include conducting annual physical inspections of the properties, reviewing project financial statements, and verifying tenants’ income and eligibility for program rental assistance benefits. 
HUD’s Section 8 Financial Management Center would oversee the work of contract administrators, and the Department would select contract administrators through a competitive procurement process. However, because of the documented weaknesses in HUD’s contracting practices in other areas, we question whether HUD is prepared to administer a new contracting initiative of this size. We, HUD’s Inspector General, and the National Academy of Public Administration have cited weaknesses in HUD’s contracting and procurement practices: inadequate oversight of contracted services because of a lack of skilled, trained staff; workload imbalances; and unclear duties, time frames, costs, and products. In addition, the Department has been under an investigation by its Inspector General for allegations of improper contract solicitation and administration of its contracts in the Department’s Note Sales program. Therefore, we believe that to ensure the success of HUD’s contracting for the new Section 8 contract administration initiative, HUD may need to provide some assurances to the Congress that the Department will have an adequate administrative structure and sufficient staffing in place to provide proper oversight of a new contracting program of this magnitude. HUD is also proposing an increase in its EZ Program. HUD’s $150 million request for Urban Empowerment Zones includes $45 million that would be distributed to the 15 communities that were designated as Strategic Planning Communities. These communities, which submitted applications for Round II EZ designation but were not chosen, could use the funds to support activities proposed in their EZ applications. Eligible activities include those covered by HUD’s CDBG and the Social Services Block Grant Program administered by the Department of Health and Human Services. However, under CDBG, HUD has already included a $10 million set-aside for meritorious communities that applied for Round II EZ designation but were not chosen. 
It is unclear why HUD needs to fund the same communities with two different programs. We provided a draft of this statement to HUD for its review and comment. Departmental officials, including HUD’s Chief Financial Officer, provided comments on several issues, including the number of programs or new initiatives that we listed and categorized as new for fiscal year 2000. HUD officials stated that programs that were funded in the past, such as Section 8 vouchers, should not be considered new, although they meet our criterion of not receiving funding in fiscal year 1999. We have included these programs because our purpose in listing new programs and initiatives is to provide an indication of the additional workload HUD may have in the approaching year. We believe that a break of 1 year or more in a program’s funding can create administrative workload, even though the Department retains programmatic expertise among its staff and contractors. HUD officials also suggested that we check some of the budget figures that we reported in the statement. We did so and made adjustments where necessary. This concludes my prepared testimony, Mr. Chairman. I would be happy to respond to any questions that you or the Members of the Subcommittee might have. For this table, GAO defined new programs and initiatives as any program or initiative that the Congress did not fund in fiscal year 1999. However, some of these programs or initiatives may have been funded in prior years. 
Pursuant to a congressional request, GAO discussed the Department of Housing and Urban Development's (HUD) fiscal year (FY) 2000 budget request, focusing on: (1) new initiatives or significant increases proposed by HUD; and (2) observations about HUD's request for funding related to several areas GAO has reported on in the past year. GAO noted that: (1) to support 19 new programs and initiatives, HUD is requesting nearly $731 million of its $28 billion total request for FY 2000; (2) in each case, Congress did not provide funding for the activity in FY 1999, although in some cases the program has been funded in prior years; (3) GAO is concerned about HUD's overall capacity to plan for, administer, and oversee this many new programs, particularly when HUD itself is undergoing significant organizational reform and when some of the new initiatives are in areas, such as contracting, in which HUD's performance has been questioned in the past; (4) one of the most significant increases in HUD's current programs for FY 2000 is a $1 billion increase in its Section 8 rental housing assistance program; (5) however, the budget does not provide sufficient information to evaluate this request; (6) GAO believes a number of associated issues exist that warrant review; (7) HUD's tracking and oversight of its Community Development and Planning grants are made difficult because information in its grants management information system is unreliable; (8) although HUD plans to replace the current system for managing and tracking Community Development Block Grants, a new system is several years away from implementation; (9) in the meantime, HUD's FY 2000 budget request proposes to continue adding set-asides to the block grant; (10) however, HUD cannot be assured that financial tracking of the individual grants and grantees will be adequate; (11) in one of its largest new initiatives, HUD is requesting over $200 million in FY 2000 to fund contract administrators for the contracts it has with owners 
of multifamily properties in HUD's project-based Section 8 housing assistance program; (12) however, work that GAO, HUD's Inspector General, and the National Academy of Public Administration have done in the past on HUD's contracting activities identified weaknesses in HUD's ability to administer contracts and monitor contractors' performance; (13) GAO believes that the success of this program will depend on the adequacy of HUD's contract selection, administration, and oversight of these contracts; (14) HUD is proposing both a new initiative and a program increase in the area of empowerment zones as well as two set-asides in the Community Development Block Grant Program for empowerment zones; and (15) these proposals raise questions about how the programs will coordinate with and benefit from each other because they target similar beneficiaries.